RAG Has Been Oversimplified - Exploring Complexities in Retrieval Augmented Generation
Offered By: MLOps.community via YouTube
Course Description
Overview
Explore the complexities of Retrieval Augmented Generation (RAG) in this 49-minute MLOps podcast episode featuring Yujian Tang, Developer Advocate at Zilliz. Delve into the nuanced challenges developers face when implementing RAG, moving beyond industry oversimplifications. Learn about storing embeddings in vector databases, the consensus on what counts as a large versus a small language model, and how QA bots work behind the scenes. Discover critical components of the RAG stack, including citation building, the distinction between context and relevance, and similarity search. Examine RAG optimization techniques, discuss scenarios where RAG may not be suitable, and explore multimodal RAG applications. Gain insights into fashion app development and video citation methods while understanding the trade-offs in interacting with LLMs.
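The retrieval step the episode covers (embed the query, run a similarity search over stored document embeddings, and prepend the hits to the prompt) can be sketched in a few lines. This is a toy illustration, not Zilliz's or Milvus's API: the bag-of-words "embedding" and the hardcoded documents stand in for a real embedding model and vector database.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real RAG stack uses a trained
    # embedding model and stores the vectors in a vector database.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Stand-in corpus; in practice these come from chunked source documents.
documents = [
    "Milvus is an open-source vector database.",
    "RAG retrieves relevant context before generation.",
    "LLMs can hallucinate without grounding.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Similarity search: rank all stored vectors against the query vector.
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def build_prompt(query: str) -> str:
    # "Augmentation": retrieved chunks are injected into the LLM prompt.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The same three steps (embed, search, augment) apply whether the store holds a handful of strings or billions of vectors; what changes at scale is the index structure and the embedding model.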
Syllabus
Yujian's preferred coffee
Takeaways
Please like, share, and subscribe to our MLOps channels!
The hero of the LLM space
Embeddings into vector databases
What counts as a large vs. small LLM: the consensus
QA bots behind the scenes
Fun fact: getting more context
RAGs eliminate the ability of LLMs to hallucinate
Critical parts of the RAG stack
Building citations
The difference between context and relevance
Missing prompt tooling
Similarity search
RAG optimization
Interacting with LLMs and trade-offs
What RAG is not suited for
Fashion app
Multimodal RAGs vs. LLM RAGs
Multimodal use cases
Video citations
Wrap up
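Two of the syllabus topics, building citations and video citations, come down to keeping source metadata attached to each retrieved chunk so the answer can point back to its origin. A minimal sketch (the `Chunk` type, the sample sources, and the `[n]` marker convention are all illustrative assumptions, not the speaker's implementation):

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    source: str  # e.g. a document path, URL, or video timestamp

# Hypothetical retrieved chunks; a real pipeline gets these from the
# vector store along with their stored metadata.
chunks = [
    Chunk("RAG retrieves relevant context before generation.", "episode.mp4#t=1220"),
    Chunk("Milvus is an open-source vector database.", "docs/milvus.md"),
]

def format_with_citations(chunks: list[Chunk]) -> str:
    # Number each chunk so the LLM can emit [1], [2], ... in its answer,
    # and the app can map those markers back to sources (for video, the
    # source can be a timestamp, enabling video citations).
    body = "\n".join(f"[{i}] {c.text}" for i, c in enumerate(chunks, 1))
    refs = "\n".join(f"[{i}] {c.source}" for i, c in enumerate(chunks, 1))
    return f"{body}\n\nSources:\n{refs}"
```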
Taught by
MLOps.community
Related Courses
TensorFlow on Google Cloud (Google Cloud via Coursera)
Art and Science of Machine Learning 日本語版 (Google Cloud via Coursera)
Art and Science of Machine Learning auf Deutsch (Google Cloud via Coursera)
Art and Science of Machine Learning em Português Brasileiro (Google Cloud via Coursera)
Art and Science of Machine Learning en Español (Google Cloud via Coursera)