
LLM Evaluation: Auditing Fine-Tuned LLMs for Guaranteed Output Quality

Offered By: Databricks via YouTube

Tags

E-commerce Courses
Prompt Engineering Courses
Information Retrieval Courses
MLflow Courses
Fine-Tuning Courses

Course Description

Overview

Explore techniques for evaluating and improving fine-tuned large language models (LLMs) in this 33-minute conference talk by Mirakl data scientists Loic Pauletto and Pierre Lourdelet. Delve into the challenges of information retrieval from e-commerce product data sheets and learn how Mirakl developed a solution using fine-tuned LLMs. Discover qualitative evaluation methods, including language-model quality metrics and hallucination detection. Understand how to leverage MLflow to automate LLM evaluation and monitoring. Gain insights into iterative quality-improvement strategies based on prompt engineering and dataset refinement, and learn how these methods enable rapid iteration on prompts and fine-tuned models to reach production-level trustworthiness. Access additional resources such as the LLM Compact Guide and the Big Book of MLOps to further expand your knowledge in this field.
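
To give a flavor of the MLflow-based evaluation the talk describes, here is a minimal sketch of scoring pre-generated LLM outputs with mlflow.evaluate(). It is illustrative only, not the speakers' actual pipeline: the example data and column names are hypothetical, and it assumes MLflow 2.x with pandas installed.

# Minimal sketch: evaluate a static table of LLM outputs with MLflow.
# The dataframe contents and column names are hypothetical.
import mlflow
import pandas as pd

# Pre-generated model outputs alongside reference answers.
eval_df = pd.DataFrame({
    "inputs": ["What screen size does the product data sheet list?"],
    "ground_truth": ["6.1 inches"],
    "outputs": ["6.1 inches"],
})

with mlflow.start_run():
    results = mlflow.evaluate(
        data=eval_df,
        targets="ground_truth",
        predictions="outputs",
        model_type="question-answering",  # logs built-in QA metrics such as exact_match
    )
    print(results.metrics)  # metrics are also logged to the MLflow run

For checks closer to the hallucination detection mentioned above, MLflow's GenAI judge metrics (for example, mlflow.metrics.genai.faithfulness) can be passed via the extra_metrics argument, though configuring a judge model is beyond this sketch.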

Syllabus

LLM Evaluation: Auditing Fine-Tuned LLMs for Guaranteed Output Quality


Taught by

Databricks

Related Courses

Basics of e-Commerce
Canvas Network
Foundations of E-Commerce
Nanyang Technological University via Coursera
E-Commerce (التجارة الالكترونية)
Rwaq (رواق)
App Monetization
Google via Udacity
Capstone - Launch Your Own Business!
Michigan State University via Coursera