How to Evaluate Enterprise LLMs in Snorkel Flow
Offered By: Snorkel AI via YouTube
Course Description
Overview
Discover how to evaluate enterprise Large Language Models (LLMs) using Snorkel Flow in this 20-minute demonstration video. Follow along as Snorkel AI software engineer Rebecca Westerlind walks through the iterative loop at the core of Snorkel Flow's AI data development workflow. Learn to build a robust evaluation framework, use Snorkel's features to create high-quality training data, and analyze LLM performance across metrics and data slices. Gain practical insight into LLM evaluation, Snorkel Flow, and enterprise LLM deployment. This demo, excerpted from a longer webinar, offers a step-by-step process for moving enterprise LLM applications from demonstration to production.
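The slice-level analysis mentioned above is the key idea behind this style of evaluation: score model responses overall and per data slice so weak spots stand out. The sketch below is plain Python, not the Snorkel Flow API; the example data, slice names, and accuracy helper are hypothetical illustrations of the technique.

    # Minimal sketch of slice-based LLM evaluation. Illustrative only: this is
    # NOT the Snorkel Flow API; data, slice names, and metric are hypothetical.
    from collections import defaultdict

    # Each example pairs a gold label with a model response and slice tags.
    examples = [
        {"gold": "approve", "pred": "approve", "slices": ["short_doc"]},
        {"gold": "deny",    "pred": "approve", "slices": ["short_doc", "legal_jargon"]},
        {"gold": "approve", "pred": "approve", "slices": ["legal_jargon"]},
        {"gold": "deny",    "pred": "deny",    "slices": ["long_doc"]},
    ]

    def accuracy(items):
        # Fraction of examples where the model response matches the gold label.
        return sum(ex["pred"] == ex["gold"] for ex in items) / len(items)

    # Group examples by slice so performance can be reported per slice.
    by_slice = defaultdict(list)
    for ex in examples:
        for name in ex["slices"]:
            by_slice[name].append(ex)

    print(f"overall accuracy: {accuracy(examples):.2f}")
    for name, items in sorted(by_slice.items()):
        print(f"{name}: {accuracy(items):.2f} over {len(items)} examples")

A per-slice report like this makes it clear when a model that looks strong in aggregate underperforms on a specific subset, which is the gap the demo's evaluation framework is meant to surface.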
Syllabus
DEMO: How to Evaluate Enterprise LLMs in Snorkel Flow
Taught by
Snorkel AI
Related Courses
How to Optimize RAG Pipelines for Domain- and Enterprise-Specific Tasks
Snorkel AI via YouTube
LLM Evaluation for Production Enterprise Applications
Snorkel AI via YouTube
Aligning Large Language Models for Enterprise Applications in Snorkel Flow - Demo
Snorkel AI via YouTube
How to Accelerate AI Training With Programmatic Data Labeling - Snorkel Flow Demo
Snorkel AI via YouTube
New in Snorkel Flow 2024.R1: Enhanced Security, Image Categorization, and More
Snorkel AI via YouTube