Navigating via Retrieval Evaluation to Demystify LLM Wonderland
Offered By: MLOps.community via YouTube
Course Description
Overview
Explore the critical role of retrieval evaluation in large language model (LLM)-based applications such as retrieval-augmented generation (RAG) in this 13-minute conference talk by Atita Arora, presented at the MLOps.community AI in Production event. Delve into the correlation between retrieval accuracy and answer quality, and understand the importance of thorough evaluation methodologies. Learn from Arora's 15 years of experience as a Solution Architect and Search Relevance strategist as she shares insights on decoding complex business challenges and pioneering innovative information retrieval solutions. Gain practical knowledge about evaluating RAG systems while navigating the world of vectors and LLMs, and discover how these insights can improve effectiveness in real-world problem-solving.
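The talk centers on measuring retrieval quality in RAG pipelines. As an illustration of what such an evaluation can look like (a minimal sketch, not taken from the talk itself), two standard ranking metrics, precision@k and reciprocal rank, can be computed over a retriever's ranked results against a set of labeled relevant documents:

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved documents that are labeled relevant."""
    top_k = retrieved[:k]
    return sum(1 for doc in top_k if doc in relevant) / k

def reciprocal_rank(retrieved, relevant):
    """1 / rank of the first relevant document, or 0.0 if none appears."""
    for rank, doc in enumerate(retrieved, start=1):
        if doc in relevant:
            return 1.0 / rank
    return 0.0

# Toy example with hypothetical document ids:
# the retriever's ranked output vs. the human-labeled relevant set.
retrieved = ["d3", "d1", "d7", "d2", "d9"]
relevant = {"d1", "d2", "d5"}

print(precision_at_k(retrieved, relevant, 5))  # 0.4
print(reciprocal_rank(retrieved, relevant))    # 0.5
```

Averaging such metrics over a query set gives a retrieval-side score that can then be correlated with downstream answer quality, which is the relationship the talk examines.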
Syllabus
Navigating via Retrieval Evaluation to Demystify LLM Wonderland // Atita Arora // AI in Production
Taught by
MLOps.community
Related Courses
Machine Learning Operations (MLOps): Getting Started (Google Cloud via Coursera)
Design and Implementation of Machine Learning Systems (Higher School of Economics via Coursera)
Demystifying Machine Learning Operations (MLOps) (Pluralsight)
Machine Learning Engineer with Microsoft Azure (Microsoft via Udacity)
Machine Learning Engineering for Production (MLOps) (DeepLearning.AI via Coursera)