LLM Evaluation Framework for Crafting Delightful Content from Messy Inputs

Offered By: MLOps World: Machine Learning in Production via YouTube

Tags

Machine Learning Courses · Canva Courses · MLOps Courses

Course Description

Overview

Explore an evaluation framework for assessing the quality of Large Language Model (LLM) outputs when transforming diverse, messy textual inputs into refined content. In this 32-minute conference talk, Shin Liang, Senior Machine Learning Engineer at Canva, examines the challenge of objectively evaluating LLM outputs on subjective, unstructured tasks. Learn about general evaluation metrics such as relevance, fluency, and coherence, alongside task-specific metrics such as information preservation rate, accuracy of title/heading understanding, and key information extraction scores. Discover how this framework can be applied to similar LLM tasks, offering practical guidance for crafting high-quality content from complex inputs.
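For readers who want a concrete picture, the sketch below shows one plausible way the general and task-specific metrics named above could be combined into a single quality score. The metric names follow the description; the data structure, weighting, and aggregation are illustrative assumptions, not the speaker's actual implementation.

from dataclasses import dataclass

# Illustrative sketch only: metric names come from the talk description;
# the weights and the aggregation formula are hypothetical.

@dataclass
class EvalScores:
    relevance: float                 # general metrics, each assumed in [0, 1]
    fluency: float
    coherence: float
    info_preservation_rate: float    # task-specific metrics
    heading_accuracy: float          # accuracy of title/heading understanding
    key_info_extraction: float       # key information extraction score

def aggregate(scores: EvalScores, general_weight: float = 0.5) -> float:
    """Combine general and task-specific metrics into one quality score."""
    general = (scores.relevance + scores.fluency + scores.coherence) / 3
    specific = (
        scores.info_preservation_rate
        + scores.heading_accuracy
        + scores.key_info_extraction
    ) / 3
    return general_weight * general + (1 - general_weight) * specific

# Example: score a single generated document
print(aggregate(EvalScores(0.9, 0.8, 0.85, 0.75, 0.9, 0.8)))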

Syllabus

LLM Evaluation to Craft Delightful Content From Messy Inputs


Taught by

MLOps World: Machine Learning in Production

Related Courses

Machine Learning Operations (MLOps): Getting Started
Google Cloud via Coursera
Design and Implementation of Machine Learning Systems (Проектирование и реализация систем машинного обучения)
Higher School of Economics via Coursera
Demystifying Machine Learning Operations (MLOps)
Pluralsight
Machine Learning Engineer with Microsoft Azure
Microsoft via Udacity
Machine Learning Engineering for Production (MLOps)
DeepLearning.AI via Coursera