
Observing a Large Language Model in Production

Offered By: CNCF [Cloud Native Computing Foundation] via YouTube

Tags

Observability Courses, Prompt Engineering Courses, API Management Courses, Service-Level Objectives Courses

Course Description

Overview

Explore the challenges of implementing and monitoring a Large Language Model (LLM) in production in this conference talk. Discover how Honeycomb tackled the obstacles posed by non-deterministic and inherently unreliable LLM APIs. Learn about effective instrumentation techniques, key performance indicators, and how to establish Service Level Objectives (SLOs) for LLM-powered features. See how to measure and iterate on improvements, blending prompt engineering with observability practices, so you can monitor and optimize LLM-based features in production and build more robust, reliable AI-powered applications.
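To make the instrumentation idea concrete, here is a minimal sketch (not taken from the talk) of wrapping an unreliable LLM API call so that each request emits a single structured event with the fields an observability backend could query: latency, prompt and response sizes, and any error. The `observe_llm_call` name and the JSON-to-stdout transport are illustrative assumptions, standing in for a real client and a real event pipeline.

```python
import json
import time
from typing import Any, Callable, Dict


def observe_llm_call(prompt: str, call: Callable[[str], str]) -> Dict[str, Any]:
    """Wrap a (possibly failing) LLM API call and emit one structured event.

    Hypothetical helper for illustration: in a real system the event would be
    sent to an observability backend rather than printed.
    """
    event: Dict[str, Any] = {
        "name": "llm_request",
        "prompt_length": len(prompt),
    }
    start = time.monotonic()
    try:
        response = call(prompt)
        event["error"] = None
        event["response_length"] = len(response)
    except Exception as exc:  # LLM APIs fail in varied, non-deterministic ways
        event["error"] = repr(exc)
        event["response_length"] = 0
    event["duration_ms"] = (time.monotonic() - start) * 1000.0
    print(json.dumps(event))  # stand-in for shipping the event to a backend
    return event


# Example with a stand-in for a real LLM client:
fake_llm = lambda prompt: "The answer is 42."
event = observe_llm_call("What is 6 * 7?", fake_llm)
```

Capturing success, failure, and latency per request in one wide event is what makes it possible to define an SLO afterward, e.g. "99% of LLM requests succeed within 10 seconds over 30 days."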

Syllabus

Observing a Large Language Model in Production - Phillip Carter, Honeycomb


Taught by

CNCF [Cloud Native Computing Foundation]

Related Courses

Developing a Google SRE Culture
Google Cloud via Coursera
Site Reliability Engineering: Measuring and Managing Reliability
Pluralsight
Developing a Google SRE Culture en Français
Google Cloud via Coursera
Identifying and Resolving Application Latency for Site Reliability Engineers
Google Cloud via Coursera