Building Trust with Large Language Models - Evaluation Techniques
Offered By: DevConf via YouTube
Course Description
Overview
Explore the intricacies of evaluating Large Language Models (LLMs) in this 34-minute conference talk from DevConf.US 2024. Delve into the fundamental aspects of LLM assessment, starting with traditional metrics like ROUGE and BLEU scores and progressing to advanced techniques such as model-based evaluation with LangChain criteria metrics. Examine human-based evaluation methods and common evaluation benchmarks. Through a text generation demo application, compare the different evaluation techniques and weigh their advantages and disadvantages. Address common challenges in assessing LLM quality and learn strategies to overcome them. Presented by Surya Pathak and Hema Veeradhi, this talk equips you with a comprehensive understanding of LLM evaluation techniques, helping you build trust in these models and apply them effectively in real-world, open-source applications.
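
To make the traditional-metrics step concrete, here is a minimal sketch of scoring one generated sentence against a reference with ROUGE and BLEU. It assumes the rouge-score and NLTK packages; the example strings are invented for illustration and are not from the talk's demo.

    # Traditional overlap metrics: ROUGE (rouge-score package) and BLEU (NLTK).
    # pip install rouge-score nltk
    from rouge_score import rouge_scorer
    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    reference = "The cat sat quietly on the warm mat."   # illustrative strings,
    candidate = "A cat was sitting on the mat."          # not from the talk

    # ROUGE: n-gram overlap reported as precision/recall/F1.
    scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
    rouge = scorer.score(reference, candidate)
    print("ROUGE-1 F1:", round(rouge["rouge1"].fmeasure, 3))
    print("ROUGE-L F1:", round(rouge["rougeL"].fmeasure, 3))

    # BLEU: n-gram precision with a brevity penalty; smoothing avoids
    # zero scores on short single-sentence comparisons.
    bleu = sentence_bleu(
        [reference.split()],
        candidate.split(),
        smoothing_function=SmoothingFunction().method1,
    )
    print("BLEU:", round(bleu, 3))

Both metrics reward surface overlap with the reference, which is exactly the limitation that motivates the model-based techniques covered next in the talk.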
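For the model-based step, the sketch below uses LangChain's criteria evaluator, in which a judge LLM grades an output against a named criterion. The "conciseness" criterion, the gpt-4o-mini judge model, and the example strings are illustrative assumptions rather than the speakers' setup, and exact import paths vary across LangChain versions.

    # Model-based evaluation with a LangChain criteria evaluator.
    # pip install langchain langchain-openai  (requires OPENAI_API_KEY)
    from langchain.evaluation import load_evaluator
    from langchain_openai import ChatOpenAI

    # Hypothetical judge model; any chat model LangChain supports would do.
    judge = ChatOpenAI(model="gpt-4o-mini", temperature=0)
    evaluator = load_evaluator("criteria", criteria="conciseness", llm=judge)

    result = evaluator.evaluate_strings(
        input="Summarize our refund policy in one sentence.",
        prediction="Refunds are issued within 30 days of purchase with a receipt.",
    )
    print(result["score"])      # 1 if the criterion is judged met, else 0
    print(result["reasoning"])  # the judge model's explanation

Unlike ROUGE or BLEU, this approach needs no reference text, but it inherits the judge model's biases, a trade-off the talk weighs alongside human-based evaluation.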
Syllabus
Building Trust with LLMs - DevConf.US 2024
Taught by
DevConf
Related Courses
Intro to Deep Learning with PyTorch - Facebook via Udacity
Natural Language Processing with Sequence Models - DeepLearning.AI via Coursera
Deep Learning - Universidad Anáhuac via edX
Create a Superhero Name Generator with TensorFlow - Coursera Project Network via Coursera
Natural Language Generation in Python - DataCamp