Evaluating LLM Applications - Insights from Shahul Es
Offered By: MLOps.community via YouTube
Course Description
Overview
Dive into a comprehensive 51-minute podcast episode featuring Shahul Es, a data science expert and Kaggle Grandmaster, as he explores the intricacies of evaluating Large Language Model (LLM) applications. Learn about debugging techniques, troubleshooting strategies, and the challenges associated with benchmarks in open-source models. Gain valuable insights on custom data distributions, the significance of fine-tuning in improving model performance, and the Ragas Project. Discover the importance of evaluation metrics, the impact of gamed leaderboards, and strategies for recommending effective evaluation processes. Explore topics such as prompt injection, alignment, and the concept of "garbage in, garbage out" in LLM applications. Connect with the MLOps community through various channels and access additional resources, including job boards and merchandise.
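Since the episode centers on the Ragas project, here is a minimal sketch of what a Ragas evaluation run can look like, modeled on the library's quickstart. The metric imports, dataset column names, and the reliance on an OpenAI API key for the judge model are assumptions that may differ across Ragas versions.

```python
# Minimal sketch of a Ragas-style evaluation run.
# Assumes the ragas and datasets packages are installed and an OpenAI API key
# is configured, since Ragas uses an LLM judge by default; exact column names
# and metric imports may vary between Ragas versions.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

# A tiny hand-made sample in the question / answer / contexts format
# Ragas expects for RAG evaluation.
samples = {
    "question": ["What is Ragas used for?"],
    "answer": ["Ragas is an open-source library for evaluating RAG pipelines."],
    "contexts": [[
        "Ragas is an open-source library for evaluating retrieval-augmented "
        "generation (RAG) pipelines with metrics such as faithfulness."
    ]],
}
dataset = Dataset.from_dict(samples)

# Score the sample with two reference-free metrics: faithfulness checks the
# answer against the retrieved contexts, answer_relevancy checks it against
# the question.
result = evaluate(dataset, metrics=[faithfulness, answer_relevancy])
print(result)  # prints per-metric scores for the sample
```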
Syllabus
Shahul's preferred coffee
Takeaways
Please like, share, and subscribe to our MLOps channels!
Shahul's definition of evaluation
Evaluation metrics and benchmarks
Gamed leaderboards
Best open-source models for summarizing long text
Benchmarks
Recommending an evaluation process
LLMs evaluating other LLMs
Debugging models that fail evaluation
Prompt injection
Alignment
OpenAssistant
Garbage in, garbage out
Ragas
Valuable use cases besides OpenAI
Fine-tuning LLMs
Connect with Shahul for help with Ragas: @Shahules786 on Twitter
Wrap up
Taught by
MLOps.community
Related Courses
Machine Learning Operations (MLOps): Getting Started (Google Cloud via Coursera)
Проектирование и реализация систем машинного обучения (Design and Implementation of Machine Learning Systems) (Higher School of Economics via Coursera)
Demystifying Machine Learning Operations (MLOps) (Pluralsight)
Machine Learning Engineer with Microsoft Azure (Microsoft via Udacity)
Machine Learning Engineering for Production (MLOps) (DeepLearning.AI via Coursera)