Evaluating Quality and Improving LLM Products at Scale

Offered By: MLOps.community via YouTube

Tags

MLOps Courses, Quality Assurance Courses, Prompt Engineering Courses, Generative AI Courses, Product Development Courses, Software Engineering Courses

Course Description

Overview

Explore strategies for evaluating and improving large language model (LLM) products at scale in this 15-minute talk by Austin Bell at the AI in Production Conference. Learn how to measure the impact of prompt changes and pre-processing techniques on LLM output quality, enabling confident deployment of product improvements. Bell draws on his experience as a Staff Software Engineer at Slack, where he develops text-based ML and Generative AI products, to show how effective evaluation and measurement techniques keep generative products improving consistently.

Syllabus

Evaluating Quality and Improving LLM Products at Scale // Austin Bell // AI in Production Conference


Taught by

MLOps.community

Related Courses

Building and Managing Superior Skills
State University of New York via Coursera
ChatGPT et IA : mode d'emploi pour managers et RH
CNAM via France Université Numérique
Digital Skills: Artificial Intelligence
Accenture via FutureLearn
AI Foundations for Everyone
IBM via Coursera
Design a Feminist Chatbot
Institute of Coding via FutureLearn