Observability for LLMs - Lightning Talk
Offered By: MLOps.community via YouTube
Course Description
Overview
Explore observability techniques for Large Language Models (LLMs) in this lightning talk from the LLMs in Production Conference. Learn why traditional debugging and unit testing fall short for LLMs, and discover how observability practices can make them more reliable. Gain insights into instrumenting features for rich telemetry, analyzing behavior from collected data, and using observability as a key source for evaluations. Topics include natural language processing, distributed tracing, monitoring the end-user experience, and using OpenTelemetry. Presented by Phillip Carter, an OpenTelemetry maintainer and AI initiatives leader at Honeycomb, this talk offers valuable knowledge for anyone working to make LLMs dependable in production environments.
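The talk's core idea, instrumenting LLM features so each call emits rich telemetry that can be analyzed later, can be sketched with a toy tracer. This is a simplified stand-in for illustration only: a real system would use the OpenTelemetry SDK, and names like `ask_llm` and the span attributes are hypothetical.

```python
import time
from contextlib import contextmanager

# Minimal stand-in for a tracer: records span name, attributes, and
# duration so LLM calls can be inspected and evaluated after the fact.
SPANS = []

@contextmanager
def span(name, **attributes):
    record = {"name": name, "attributes": dict(attributes)}
    start = time.perf_counter()
    try:
        yield record
    finally:
        record["duration_s"] = time.perf_counter() - start
        SPANS.append(record)

def ask_llm(prompt):
    # Hypothetical LLM feature; a real system would call a model API here.
    with span("llm.completion", prompt=prompt) as s:
        response = f"echo: {prompt}"  # stub model output
        s["attributes"]["response"] = response
        s["attributes"]["response_tokens"] = len(response.split())
    return response

ask_llm("Why did the deploy fail?")
print(SPANS[0]["name"], SPANS[0]["attributes"]["response_tokens"])
```

Capturing the prompt, response, and timing on every span is what lets collected telemetry double as an evaluation dataset, one of the points the talk emphasizes.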
Syllabus
Intro
Welcome
Observability
Natural Language
Results
Example
Distributed tracing
Monitoring end user experience
OpenTelemetry
Taught by
MLOps.community
Related Courses
Machine Learning Operations (MLOps): Getting Started, Google Cloud via Coursera
Design and Implementation of Machine Learning Systems, Higher School of Economics via Coursera
Demystifying Machine Learning Operations (MLOps), Pluralsight
Machine Learning Engineer with Microsoft Azure, Microsoft via Udacity
Machine Learning Engineering for Production (MLOps), DeepLearning.AI via Coursera