Examining the Principles of Observability and Its Relevance in LLM Applications
Offered By: Linux Foundation via YouTube
Course Description
Overview
Explore the principles of observability and their application to Large Language Model (LLM) applications in this 19-minute conference talk by Guangya Liu and Jean Detoeuf from IBM. Gain insights into the importance of monitoring AI behavior as LLMs become increasingly prevalent across applications. Discover why users demand transparency in AI decision-making and how observability addresses these concerns. Learn about key metrics to observe in LLM applications, including model latency, cost, and usage tracking. Examine emerging technologies such as Traceloop, OpenTelemetry, and Langfuse, and understand how to leverage these tools for analytics, monitoring, and optimization of LLM applications. Delve into methods for refining LLM performance, uncovering biases, troubleshooting problems, and ensuring AI reliability and trustworthiness through effective observability practices.
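To make the "key metrics" concrete, here is a minimal, self-contained Python sketch of the kind of per-request accounting the talk describes: latency, token usage, and estimated cost. It is an illustration only, not code from the talk; the `PRICE_PER_1K` rates and the `LLMMetrics` class are hypothetical stand-ins (real tools such as OpenTelemetry, Traceloop, or Langfuse would record these signals for you).

```python
import time
from dataclasses import dataclass, field

# Hypothetical per-1K-token prices; real values depend on the model and provider.
PRICE_PER_1K = {"prompt": 0.0005, "completion": 0.0015}

@dataclass
class LLMMetrics:
    """Accumulates the per-request signals an LLM observability
    stack typically tracks: latency, token counts, and cost."""
    latencies_ms: list = field(default_factory=list)
    total_cost_usd: float = 0.0

    def record(self, latency_ms: float, prompt_tokens: int,
               completion_tokens: int) -> float:
        # Estimate the request's cost from its token usage.
        cost = (prompt_tokens / 1000) * PRICE_PER_1K["prompt"] \
             + (completion_tokens / 1000) * PRICE_PER_1K["completion"]
        self.latencies_ms.append(latency_ms)
        self.total_cost_usd += cost
        return cost

metrics = LLMMetrics()
start = time.perf_counter()
# ... the LLM call would go here ...
elapsed_ms = (time.perf_counter() - start) * 1000
cost = metrics.record(elapsed_ms, prompt_tokens=120, completion_tokens=480)
print(f"request cost: ${cost:.6f}")
```

In practice these values would be exported as OpenTelemetry metrics or traces rather than held in memory, so dashboards can aggregate latency percentiles and spend per model over time.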
Syllabus
Examining the Principles of Observability and Its Relevance in LLM Applications - Guangya Liu & Jean Detoeuf
Taught by
Linux Foundation
Related Courses
.NET Diagnostics for Applications: Best Practices (Pluralsight)
OpenTelemetry Course - Understand Software Performance (freeCodeCamp)
Monitoring and Observability for Application Developers (IBM via edX)
Distributed Tracing in .NET 6 using OpenTelemetry (NDC Conferences via YouTube)
Application Diagnostics in .NET Core 3.1 (NDC Conferences via YouTube)