Evaluating LLMs and RAG Pipelines at Scale
Offered By: MLOps World: Machine Learning in Production via YouTube
Course Description
Overview
Discover how to effectively evaluate Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) pipelines in production environments. Explore the unique challenges posed by unstructured outputs and the multitude of parameters involved in these systems. Learn about Valor, an open-source evaluation service, and its role in facilitating rigorous, real-world testing. Gain insights into integrating evaluation processes into existing LLMOps tech stacks, enabling teams to determine the optimal LLM model and parameters for specific tasks and datasets. Delve into strategies for addressing the complexities of LLM evaluation, including prompt templates, document chunking strategies, and embedding models.
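The parameter sweep described above (comparing LLM models, chunking strategies, and other pipeline settings against a fixed dataset) can be sketched in plain Python. This is a minimal, hypothetical illustration, not Valor's actual API: the dataset, model names, chunk sizes, and the `run_pipeline` stub are all invented for the example, and the token-overlap F1 metric stands in for whatever evaluation metric a real service would compute.

```python
from itertools import product

# Hypothetical toy dataset of (question, reference answer) pairs; a real
# evaluation would use a held-out set representative of production traffic.
DATASET = [
    ("What does RAG stand for?", "retrieval-augmented generation"),
    ("What is an embedding model used for?", "mapping text to vectors"),
]

# Parameters to sweep; the chunk sizes and model names are illustrative only.
CHUNK_SIZES = [256, 512]
MODELS = ["model-a", "model-b"]


def run_pipeline(question, chunk_size, model):
    """Stand-in for a real RAG pipeline (chunk, embed, retrieve, generate).

    Here the 'pipeline' just looks up the reference answer; to make the
    sweep non-trivial, 'model-a' with small chunks is simulated as weaker
    by dropping the last word of the answer.
    """
    references = dict(DATASET)
    gold = references[question]
    if model == "model-a" and chunk_size == 256:
        return " ".join(gold.split()[:-1])
    return gold


def score(prediction, reference):
    """Token-overlap F1 between prediction and reference (a common
    lexical proxy metric for unstructured LLM outputs)."""
    pred, ref = set(prediction.split()), set(reference.split())
    overlap = len(pred & ref)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)


def evaluate():
    """Score every (chunk_size, model) configuration over the dataset."""
    results = {}
    for chunk_size, model in product(CHUNK_SIZES, MODELS):
        scores = [
            score(run_pipeline(q, chunk_size, model), ref)
            for q, ref in DATASET
        ]
        results[(chunk_size, model)] = sum(scores) / len(scores)
    return results


if __name__ == "__main__":
    results = evaluate()
    best = max(results, key=results.get)
    for config, s in sorted(results.items()):
        print(config, round(s, 3))
    print("best config:", best)
```

In practice an evaluation service replaces the stubbed pipeline and metric with real inference calls and task-appropriate metrics, but the shape of the loop (enumerate configurations, score each on the same dataset, rank) is the same.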
Syllabus
Evaluating LLMs and RAG Pipelines at Scale
Taught by
MLOps World: Machine Learning in Production
Related Courses
Pinecone Vercel Starter Template and RAG - Live Code Review Part 2
Pinecone via YouTube
Will LLMs Kill Search? The Future of Information Retrieval
Aleksa Gordić - The AI Epiphany via YouTube
RAG But Better: Rerankers with Cohere AI - Improving Retrieval Pipelines
James Briggs via YouTube
Advanced RAG - Contextual Compressors and Filters - Lecture 4
Sam Witteveen via YouTube
LangChain Multi-Query Retriever for RAG - Advanced Technique for Broader Vector Space Search
James Briggs via YouTube