Navigating via Retrieval Evaluation to Demystify LLM Wonderland
Offered By: MLOps.community via YouTube
Course Description
Overview
Explore the critical role of retrieval evaluation in Large Language Model (LLM)-based applications such as Retrieval-Augmented Generation (RAG) in this 13-minute conference talk by Atita Arora, presented at the MLOps.community AI in Production event. Delve into the correlation between retrieval accuracy and answer quality, and understand why thorough evaluation methodology matters. Learn from Arora's 15 years of experience as a Solution Architect and Search Relevance strategist as she shares insights on decoding complex business challenges and pioneering innovative information retrieval solutions. Gain practical guidance on evaluating RAG systems while navigating the world of vectors and LLMs, and discover how these insights translate into more effective real-world applications.
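For context on what "retrieval evaluation" typically involves in a RAG setting, the sketch below computes a few standard ranking metrics (precision@k, recall@k, reciprocal rank) for a single query against labelled relevant documents. This is an illustrative assumption about the kind of measurement the talk discusses, not the speaker's actual code; all names and data are hypothetical.

# Minimal retrieval-evaluation sketch (illustrative only; names and data are hypothetical).

def precision_at_k(retrieved_ids, relevant_ids, k):
    # Fraction of the top-k retrieved documents that are relevant.
    top_k = retrieved_ids[:k]
    hits = sum(1 for doc_id in top_k if doc_id in relevant_ids)
    return hits / k if k else 0.0

def recall_at_k(retrieved_ids, relevant_ids, k):
    # Fraction of all relevant documents that appear in the top-k results.
    top_k = retrieved_ids[:k]
    hits = sum(1 for doc_id in top_k if doc_id in relevant_ids)
    return hits / len(relevant_ids) if relevant_ids else 0.0

def reciprocal_rank(retrieved_ids, relevant_ids):
    # 1 / rank of the first relevant document, or 0.0 if none is retrieved.
    for rank, doc_id in enumerate(retrieved_ids, start=1):
        if doc_id in relevant_ids:
            return 1.0 / rank
    return 0.0

# Example: one query's ranked retrieval result and its labelled relevant documents.
retrieved = ["doc_7", "doc_2", "doc_9", "doc_4", "doc_1"]
relevant = {"doc_2", "doc_4"}

print(precision_at_k(retrieved, relevant, k=5))  # 0.4
print(recall_at_k(retrieved, relevant, k=5))     # 1.0
print(reciprocal_rank(retrieved, relevant))      # 0.5

In practice these per-query scores are averaged over an evaluation set, and the talk's central point is that weak retrieval scores propagate directly into weak generated answers.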
Syllabus
Navigating via Retrieval Evaluation to Demystify LLM Wonderland // Atita Arora // AI in Production
Taught by
MLOps.community
Related Courses
Qdrant - A Vector Search Engine in Rust
Rust via YouTube
Introduction to Retrieval Augmented Generation (RAG)
Duke University via Coursera
Advanced RAG with Llama 3 in LangChain - Building a PDF Chat System
Venelin Valkov via YouTube
Hands-On AI: RAG using LlamaIndex
LinkedIn Learning
Local RAG with Llama 3.1 for PDFs - Private Chat with Documents using LangChain and Streamlit
Venelin Valkov via YouTube