
LLM Evaluation: Auditing Fine-Tuned LLMs for Guaranteed Output Quality

Offered By: Databricks via YouTube

Tags

E-commerce Courses
Prompt Engineering Courses
Information Retrieval Courses
MLflow Courses
Fine-Tuning Courses

Course Description

Overview

Explore techniques for evaluating and improving fine-tuned Large Language Models (LLMs) in this 33-minute conference talk by Mirakl data scientists Loic Pauletto and Pierre Lourdelet. Delve into the challenges of information retrieval from e-commerce product data sheets and learn how Mirakl developed a solution using fine-tuned LLMs. Discover qualitative evaluation methods, including language model quality metrics and hallucination detection. Understand how to leverage MLflow for automating LLM evaluation and monitoring. Gain insights into iterative quality improvement strategies through prompt engineering and dataset refinement. Learn how these methods enable rapid iteration on prompts and fine-tuned models to achieve production-level trustworthiness. Access additional resources, such as the LLM Compact Guide and the Big Book of MLOps, to further expand your knowledge in this field.
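
To illustrate the kind of MLflow-based evaluation automation the talk covers, here is a minimal sketch using mlflow.evaluate() on a small question-answering evaluation set. The model URI, evaluation data, and metric expectations are assumptions for illustration, not details taken from the talk.

```python
import mlflow
import pandas as pd

# Hypothetical evaluation set: prompts against product data sheets plus expected answers
eval_data = pd.DataFrame({
    "inputs": [
        "What is the battery capacity listed in this product data sheet? ...",
        "What is the screen size listed in this product data sheet? ...",
    ],
    "ground_truth": ["5000 mAh", "6.1 inches"],
})

with mlflow.start_run():
    # "models:/product-extractor/1" is an illustrative registered-model URI,
    # standing in for a fine-tuned extraction model
    results = mlflow.evaluate(
        model="models:/product-extractor/1",
        data=eval_data,
        targets="ground_truth",
        model_type="question-answering",  # enables built-in text-quality metrics
    )
    # Aggregate metrics (e.g. exact match, toxicity, readability) are logged to the run,
    # so successive prompt or fine-tuning iterations can be compared side by side
    print(results.metrics)
```

Because each evaluation run is logged to MLflow, results from different prompts or fine-tuned checkpoints can be compared in the tracking UI, which is one way to support the rapid iteration described above.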

Syllabus

LLM Evaluation: Auditing Fine-Tuned LLMs for Guaranteed Output Quality


Taught by

Databricks

Related Courses

Bank Fraud Prediction with AutoML and PyCaret
Coursera Project Network via Coursera
Satellite Data Classification with AutoML and PyCaret
Coursera Project Network via Coursera
Real-World Regression (ML) with PyCaret
Coursera Project Network via Coursera
ML Pipelines on Google Cloud
Google Cloud via Coursera
ML Pipelines on Google Cloud
Pluralsight