Methods for Evaluating Your GenAI Application Quality
Offered By: Databricks via YouTube
Course Description
Overview
Explore comprehensive methods for evaluating Generative AI application quality in this 37-minute conference talk by Databricks. Dive into a suite of tools, including inference tables, Lakehouse Monitoring, and MLflow, for rigorous evaluation and quality assurance of model responses. Learn to conduct offline evaluations and real-time monitoring to maintain high performance standards. Discover best practices for using LLMs as judges, integrating MLflow for experiment tracking, and leveraging inference tables and Lilac for enhanced model management. Learn to optimize workflows and build robust, scalable GenAI applications aligned with production goals. Presented by Alkis Polyzotis and Michael Carbin, this talk offers valuable insights for developers and data scientists working with Generative AI technologies.
Syllabus
Methods for Evaluating Your GenAI Application Quality
Taught by
Databricks
Related Courses
Data Processing with Azure (LearnQuest via Coursera)
Mejores prácticas para el procesamiento de datos en Big Data (Coursera Project Network via Coursera)
Data Science with Databricks for Data Analysts (Databricks via Coursera)
Azure Data Engineer con Databricks y Azure Data Factory (Coursera Project Network via Coursera)
Curso Completo de Spark con Databricks (Big Data) (Coursera Project Network via Coursera)