LLM Evaluation Framework for Crafting Delightful Content from Messy Inputs
Offered By: MLOps World: Machine Learning in Production via YouTube
Course Description
Overview
Explore an evaluation framework for assessing the quality of Large Language Model (LLM) outputs when transforming diverse, messy textual inputs into refined content. This 32-minute conference talk by Shin Liang, Senior Machine Learning Engineer at Canva, delves into the challenges of objectively evaluating LLM outcomes on subjective, unstructured tasks. Learn about general evaluation metrics such as relevance, fluency, and coherence, as well as task-specific metrics such as information preservation rate, accuracy of title/heading understanding, and key information extraction scores. Discover how this framework can be applied to similar LLM tasks, providing practical guidance for crafting high-quality content from complex inputs.
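The listing names these metrics without defining them. As a rough illustration only, an information preservation rate might be computed as the fraction of key facts from the source input that survive in the generated output; the function name, matching rule, and example data in the Python sketch below are assumptions made for illustration, not taken from the talk.

def information_preservation_rate(key_facts: list[str], output: str) -> float:
    """Hypothetical metric: fraction of key facts found (case-insensitively,
    verbatim) in the LLM output. Returns 1.0 when there are no facts to check."""
    if not key_facts:
        return 1.0
    lowered = output.lower()
    preserved = sum(1 for fact in key_facts if fact.lower() in lowered)
    return preserved / len(key_facts)

# Example: only one of the three facts survives as an exact substring.
facts = ["Q3 revenue grew 12%", "launch date: May 4", "CEO: A. Smith"]
draft = "Q3 revenue grew 12% ahead of the May 4 launch date."
print(information_preservation_rate(facts, draft))  # 0.333...

Exact substring matching is deliberately naive here; a production version would more plausibly use fuzzy matching, embedding similarity, or an LLM judge to credit paraphrased facts, and the talk's actual framework may define the metric differently.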
Syllabus
LLM Evaluation to Craft Delightful Content From Messy Inputs
Taught by
MLOps World: Machine Learning in Production
Related Courses
Introduction to Graphic Design - Canva via OpenLearning
Passport to Canvas (Grades 6-12/HE) - Canvas Network
Use Canva to Create Social Media Marketing Designs - Coursera Project Network via Coursera
Create a Business Marketing Brand Kit Using Canva - Coursera Project Network via Coursera
Use Canva to Create an Interactive Mind Map - Coursera Project Network via Coursera