Evaluating Quality and Improving LLM Products at Scale
Offered By: MLOps.community via YouTube
Course Description
Overview
Explore strategies for evaluating and improving large language model (LLM) products at scale in this 15-minute conference talk by Austin Bell at the AI in Production Conference. Learn how to measure the impact of prompt changes and pre-processing techniques on LLM output quality, enabling confident deployment of product improvements. Gain insights from Bell's experience as a Staff Software Engineer at Slack, where he focuses on developing text-based ML and Generative AI products. Discover evaluation and measurement techniques that help generative products improve consistently with each change.
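The core idea the talk describes, measuring how a prompt change affects output quality before shipping it, can be illustrated with a minimal offline evaluation sketch. Everything below is hypothetical: the `generate` stub, the `eval_set`, and the `exact_match` metric are stand-ins for illustration, not Slack's actual tooling or the speaker's method.

```python
# Minimal sketch of an offline evaluation harness for comparing prompt
# variants. All names here (generate, eval_set, exact_match, score_prompt)
# are hypothetical stand-ins for a real system's components.

eval_set = [
    {"input": "2+2", "expected": "4"},
    {"input": "capital of France", "expected": "Paris"},
]

def generate(prompt_template: str, user_input: str) -> str:
    # Stub for an LLM call; a real harness would query a model with
    # prompt_template.format(input=user_input).
    canned = {"2+2": "4", "capital of France": "Paris"}
    return canned.get(user_input, "")

def exact_match(output: str, expected: str) -> float:
    # Simplest possible quality metric; real evals often use rubric
    # scoring, semantic similarity, or an LLM-as-judge instead.
    return 1.0 if output.strip().lower() == expected.strip().lower() else 0.0

def score_prompt(prompt_template: str) -> float:
    """Average the metric over the eval set for one prompt variant."""
    scores = [
        exact_match(generate(prompt_template, ex["input"]), ex["expected"])
        for ex in eval_set
    ]
    return sum(scores) / len(scores)

# Compare a baseline prompt against a candidate change: ship the change
# only if the candidate's score does not regress.
baseline = score_prompt("Answer concisely: {input}")
candidate = score_prompt("Answer concisely and accurately: {input}")
```

The point of the pattern is that a fixed evaluation set turns a subjective question ("did this prompt tweak help?") into a repeatable measurement that can gate deployment.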
Syllabus
Evaluating Quality and Improving LLM Products at Scale // Austin Bell // AI in Production Conference
Taught by
MLOps.community
Related Courses
Discover, Validate & Launch New Business Ideas with ChatGPT (Udemy)
150 Digital Marketing Growth Hacks for Businesses (Udemy)
AI: Executive Briefing (Pluralsight)
The Complete Digital Marketing Guide - 25 Courses in 1 (Udemy)
Learn to build a voice assistant with Alexa (Udemy)