RAG Has Been Oversimplified - Exploring Complexities in Retrieval Augmented Generation
Offered By: MLOps.community via YouTube
Course Description
Overview
Explore the complexities of Retrieval Augmented Generation (RAG) in this 49-minute MLOps podcast episode featuring Yujian Tang, Developer Advocate at Zilliz. Delve into the nuanced challenges developers face when implementing RAG, moving beyond industry oversimplifications. Learn about storing embeddings in vector databases, the consensus on what counts as a large or small language model, and the inner workings of QA bots. Discover critical components of the RAG stack, including citation building, the difference between context and relevance, and similarity search. Examine RAG optimization techniques, discuss scenarios where RAG is not a good fit, and explore multimodal RAG applications. Gain insights into fashion app development and video citation methods while understanding the trade-offs in interacting with LLMs.
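To make the "embeddings into vector databases" and "similarity search" topics concrete, here is a minimal retrieval sketch (not from the episode): it assumes a hypothetical embed() function and a small in-memory corpus, and uses brute-force cosine similarity where a production system would use a vector database such as Milvus.

```python
# Minimal sketch of the retrieval step in RAG.
# embed(), documents, and retrieve() are illustrative placeholders,
# not APIs discussed in the episode.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: swap in a real embedding model in practice."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

documents = [
    "Milvus is an open-source vector database.",
    "RAG retrieves relevant context before generation.",
    "Similarity search finds nearest neighbors among embeddings.",
]
doc_vectors = np.stack([embed(d) for d in documents])  # "index" the corpus

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    scores = doc_vectors @ q            # cosine similarity (vectors are unit-length)
    top = np.argsort(scores)[::-1][:k]  # highest-scoring documents first
    return [documents[i] for i in top]

print(retrieve("How does retrieval augmented generation work?"))
```

The retrieved passages would then be placed into the LLM prompt as context, which is the step where concerns like citation building and context vs. relevance come into play.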
Syllabus
Yujian's preferred coffee
Takeaways
Please like, share, and subscribe to our MLOps channels!
The hero of the LLM space
Embeddings into vector databases
What counts as a large vs. small LLM: the consensus
QA bot behind the scenes
Fun fact: getting more context
Do RAGs eliminate the ability of LLMs to hallucinate?
Critical parts of the RAG stack
Building citations
The difference between context and relevance
Missing prompt tooling
Similarity search
RAG optimization
Interacting with LLMs and trade-offs
What RAG is not suited for
Fashion app
Multimodal RAG vs. LLM RAG
Multimodal use cases
Video citations
Wrap up
Taught by
MLOps.community
Related Courses
Natural Language Processing: NLP With Transformers in Python (Udemy)
Locality Sensitive Hashing for Search with Shingling + MinHashing - Python (James Briggs via YouTube)
Hugging Face Datasets - Dataset Builder Scripts for Beginners (James Briggs via YouTube)
Choosing Indexes for Similarity Search - Faiss in Python (James Briggs via YouTube)
FAISS - Introduction to Similarity Search (James Briggs via YouTube)