RAG Has Been Oversimplified - Exploring Complexities in Retrieval Augmented Generation
Offered By: MLOps.community via YouTube
Course Description
Overview
Explore the complexities of Retrieval Augmented Generation (RAG) in this 49-minute MLOps podcast episode featuring Yujian Tang, Developer Advocate at Zilliz. Delve into the nuanced challenges developers face when implementing RAG, moving beyond industry oversimplifications. Learn about embeddings and vector databases, the consensus on what counts as a large or small language model, and the inner workings of QA bots. Discover critical components of the RAG stack, including citation building, context vs. relevance, and similarity search. Examine RAG optimization techniques, discuss scenarios where RAG may not be suitable, and explore multimodal RAG applications. Gain insights into fashion app development and video citation methods while understanding the trade-offs in LLM interactions.
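The retrieval step the episode keeps returning to (embed documents, store the vectors, rank them by similarity to a query) can be sketched minimally. This is an illustrative toy, not anything from the episode: the three-dimensional vectors and document texts are made up, and a real system would use an embedding model and a vector database such as Milvus instead of an in-memory dict.

```python
import math

# Toy "embeddings": in practice these come from an embedding model
# and are stored in a vector database, not hard-coded.
documents = {
    "doc1": ("LLMs can hallucinate facts.", [0.9, 0.1, 0.0]),
    "doc2": ("Vector databases store embeddings.", [0.1, 0.9, 0.2]),
    "doc3": ("Coffee brewing methods vary.", [0.0, 0.2, 0.9]),
}

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, top_k=1):
    """Return the top_k (doc_id, text) pairs most similar to the query."""
    scored = sorted(
        documents.items(),
        key=lambda item: cosine_similarity(query_vec, item[1][1]),
        reverse=True,
    )
    return [(doc_id, text) for doc_id, (text, _) in scored[:top_k]]

# A query vector close to doc2's embedding retrieves doc2 first; the
# retrieved text would then be prepended to the LLM prompt as context.
print(retrieve([0.2, 0.8, 0.1]))
```

The retrieved text is what gets injected into the prompt, which is why the episode stresses that similarity to the query is not the same as relevance to the answer.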
Syllabus
[] Yujian's preferred coffee
[] Takeaways
[] Please like, share, and subscribe to our MLOps channels!
[] The hero of the LLM space
[] Embeddings into vector databases
[] What is large and what is small: the LLM consensus
[] QA Bot behind the scenes
[] Fun fact: getting more context
[] RAGs eliminate the ability of LLMs to hallucinate
[] Critical part of the RAG stack
[] Building citations
[] Difference between context and relevance
[] Missing prompt tooling
[] Similarity search
[] RAG Optimization
[] Interacting with LLMs and tradeoffs
[] What RAGs are not suited for
[] Fashion App
[] Multimodal RAGs vs LLM RAGs
[] Multimodal use cases
[] Video citations
[] Wrap up
Taught by
MLOps.community
Related Courses
Generative AI, from GANs to CLIP, with Python and Pytorch (Udemy)
ODSC East 2022 Keynote by Luis Vargas, Ph.D. - The Big Wave of AI at Scale (Open Data Science via YouTube)
Comparing AI Image Caption Models: GIT, BLIP, and ViT+GPT2 (1littlecoder via YouTube)
In Conversation with the Godfather of AI (Collision Conference via YouTube)
LLaVA: The New Open Access Multimodal AI Model (1littlecoder via YouTube)