How to Systematically Test and Evaluate LLM Apps - MLOps Podcast
Offered By: MLOps.community via YouTube
Course Description
Overview
Explore a comprehensive podcast episode featuring Gideon Mendels, CEO of Comet, discussing systematic testing and evaluation of LLM applications. Gain insights into hybrid approaches combining ML and software engineering best practices, defining evaluation metrics, and tracking experimentation for LLM app development. Learn about comprehensive unit testing strategies for confident deployment, and discover the importance of managing machine learning workflows from experimentation to production. Delve into topics such as LLM evaluation methodologies, AI metrics integration, experiment tracking, collaborative approaches, and anomaly detection in model outputs. Benefit from Mendels' expertise in NLP, speech recognition, and ML research as he shares valuable insights for developers working with LLM applications.
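To make the evaluation and unit-testing themes above concrete, here is a minimal LLM-as-a-judge test sketch in Python. It assumes an OpenAI-compatible client with an OPENAI_API_KEY in the environment; the judge model name, rubric, and passing threshold are illustrative assumptions, not the specific approach Mendels describes in the episode.

```python
# Minimal LLM-as-a-judge unit test sketch.
# Assumptions: OpenAI Python SDK installed, OPENAI_API_KEY set; the model,
# rubric, and 0.7 threshold below are placeholders to adapt to your app.
import json
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """You are grading an answer produced by an LLM application.
Question: {question}
Answer: {answer}
Return JSON: {{"score": <float between 0 and 1>, "reason": "<one sentence>"}}"""

def judge(question: str, answer: str) -> dict:
    """Ask a separate 'judge' model to score an output against a rubric."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed judge model; swap for whatever you use
        messages=[{"role": "user",
                   "content": JUDGE_PROMPT.format(question=question, answer=answer)}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

def test_refund_policy_answer():
    # In a real suite, `answer` would come from the LLM app under test.
    answer = "Refunds are available within 30 days of purchase."
    verdict = judge("What is the refund policy?", answer)
    assert verdict["score"] >= 0.7, verdict["reason"]
```

Run with pytest as part of CI so regressions in answer quality block deployment, which is the spirit of the "test before you ship" strategy discussed in the episode.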
Syllabus
Gideon's preferred coffee
Takeaways
A huge shout-out to Comet ML for sponsoring this episode!
Please like, share, leave a review, and subscribe to our MLOps channels!
Evaluation metrics in AI
LLM Evaluation in Practice
LLM testing methodologies
LLM as a judge
Opik track function overview (see the code sketch after this syllabus)
Tracking user response value
Exploring AI metrics integration
Experiment tracking and LLMs
Micro/macro collaboration in AI
RAG Pipeline Reproducibility Snapshot
Collaborative experiment tracking
Feature flags in CI/CD
Labeling challenges and solutions
LLM output quality alerts
Anomaly detection in model outputs
Wrap up
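The "Opik track function overview" chapter refers to Opik, Comet's open-source LLM evaluation and tracing tool. Below is a minimal sketch of its @track decorator on a toy two-step pipeline; it assumes `pip install opik` and a configured SDK (for example via `opik configure` or environment variables), and the pipeline itself is invented for illustration rather than taken from the episode.

```python
# Sketch of tracing an LLM app with Opik's @track decorator.
# Assumptions: the opik package is installed and configured; the retrieval
# and answer functions below are toy placeholders.
from opik import track

@track
def retrieve_context(question: str) -> list[str]:
    # Placeholder retrieval step; captured as a nested span in the trace.
    return ["Our refund window is 30 days."]

@track
def answer_question(question: str) -> str:
    # Each decorated call becomes part of one trace, so inputs, outputs,
    # and timing for the whole chain show up in the Opik UI.
    context = retrieve_context(question)
    return f"Based on: {context[0]}"

if __name__ == "__main__":
    print(answer_question("What is the refund policy?"))
```

Decorating each stage separately keeps retrieval and generation distinguishable in the trace, which supports the experiment-tracking and reproducibility topics listed above.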
Taught by
MLOps.community
Related Courses
Intro to Computer Science (University of Virginia via Udacity)
Software Engineering for SaaS (University of California, Berkeley via Coursera)
CS50's Introduction to Computer Science (Harvard University via edX)
UNSW Computing 1 - The Art of Programming (OpenLearning)
Mobile Robotics (Open2Study)