Observing a Large Language Model in Production
Offered By: CNCF [Cloud Native Computing Foundation] via YouTube
Course Description
Overview
Explore the challenges and solutions of implementing and monitoring a Large Language Model (LLM) in production through this insightful conference talk. Discover how Honeycomb tackled the unique obstacles presented by non-deterministic and inherently unreliable LLM APIs. Learn about effective instrumentation techniques, key performance indicators, and the establishment of Service Level Objectives (SLOs) for LLM-powered features. Gain valuable insights into measuring and iterating on improvements, blending prompt engineering with observability practices to enhance product quality. Acquire practical knowledge on how to effectively monitor and optimize LLM-based features in a production environment, enabling you to build more robust and reliable AI-powered applications.
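The talk centers on instrumenting LLM calls and setting SLOs for them. As a minimal sketch of that idea (not Honeycomb's actual implementation; all names here are illustrative, and a real system would emit these attributes as structured events to an observability backend), one might wrap each LLM request, record latency, token counts, and errors, and compute SLO attainment from the recorded events:

```python
import time
from dataclasses import dataclass

@dataclass
class LLMCallEvent:
    """One structured event per LLM request, in the spirit of
    wide-event observability. Attribute names are illustrative."""
    model: str
    duration_ms: float
    prompt_tokens: int
    completion_tokens: int
    error: bool = False

def record_llm_call(events, model, call_fn, prompt):
    """Hypothetical wrapper: times an LLM call, captures success or
    failure, and appends a structured event to `events`."""
    start = time.monotonic()
    error = False
    result = None
    try:
        result = call_fn(prompt)
    except Exception:
        # LLM APIs are unreliable; record the failure instead of crashing.
        error = True
    duration_ms = (time.monotonic() - start) * 1000.0
    events.append(LLMCallEvent(
        model=model,
        duration_ms=duration_ms,
        prompt_tokens=len(prompt.split()),  # crude stand-in for a tokenizer
        completion_tokens=len(result.split()) if result else 0,
        error=error,
    ))
    return result

def slo_attainment(events, max_ms=2000.0):
    """Fraction of calls that succeeded within the latency target --
    the raw input to an SLO such as '99% of calls succeed in < 2 s'."""
    if not events:
        return 1.0
    good = sum(1 for e in events if not e.error and e.duration_ms <= max_ms)
    return good / len(events)
```

With events like these, "measuring and iterating on improvements" becomes concrete: change the prompt, ship, and watch whether error rate and SLO attainment move in the right direction.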
Syllabus
Observing a Large Language Model in Production - Phillip Carter, Honeycomb
Taught by
CNCF [Cloud Native Computing Foundation]
Related Courses
Elastic Cloud Infrastructure: Containers and Services (Google Cloud via Coursera)
Microsoft Azure App Service (Microsoft via edX)
API Design and Fundamentals of Google Cloud's Apigee API Platform (Google Cloud via Coursera)
API Development on Google Cloud's Apigee API Platform (Google Cloud via Coursera)
On Premises Installation and Fundamentals with Google Cloud's Apigee API Platform (Google Cloud via Coursera)