Semantic Chunking for RAG - Improving Retrieval Augmented Generation Pipelines
Offered By: James Briggs via YouTube
Course Description
Overview
Explore semantic chunking for Retrieval Augmented Generation (RAG) in this 30-minute video tutorial. Learn how to build more coherent, concise chunks for RAG pipelines, chatbots, and AI agents, and how to pair them with different LLMs and embedding models. The video walks through semantic chunking in Python, adding context to chunks, providing LLMs with more context, indexing chunks, creating chunks for the LLM, and querying for chunks; illustrative sketches of the chunking step and of the indexing/querying steps appear below and after the syllabus. The accompanying code is available on GitHub, and you can join the AI community on Discord, Twitter, and LinkedIn for further discussion and updates.
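To make the chunking step concrete, here is a minimal sketch of similarity-based semantic chunking: split the text into sentences, embed each one, and start a new chunk wherever the cosine similarity between neighbouring sentences drops below a threshold. The sentence-transformers model name, the regex sentence splitter, and the 0.5 threshold are illustrative assumptions, not the exact setup used in the video.

```python
# A minimal sketch of similarity-threshold semantic chunking.
# Assumes sentence-transformers is installed; model name, sentence splitter,
# and threshold are illustrative choices, not taken from the video.
import re
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def semantic_chunk(text: str, threshold: float = 0.5) -> list[str]:
    # Naive sentence split; a production pipeline would use a proper tokenizer.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    if len(sentences) < 2:
        return sentences

    # Embed every sentence and L2-normalise so a dot product equals cosine similarity.
    embeddings = model.encode(sentences)
    embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

    chunks, current = [], [sentences[0]]
    for i in range(1, len(sentences)):
        similarity = float(embeddings[i - 1] @ embeddings[i])
        if similarity < threshold:
            # Low similarity between neighbours marks a topic shift: start a new chunk.
            chunks.append(" ".join(current))
            current = [sentences[i]]
        else:
            current.append(sentences[i])
    chunks.append(" ".join(current))
    return chunks

print(semantic_chunk(
    "RAG retrieves documents. Retrieval relies on embeddings. "
    "Cats sleep most of the day. They purr when content."
))
```

Splitting on a similarity drop, rather than on a fixed token count, is what keeps each chunk focused on a single topic and therefore easier to retrieve precisely.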
Syllabus
Semantic Chunking for RAG
What is Semantic Chunking
Semantic Chunking in Python
Adding Context to Chunks
Providing LLMs with More Context
Indexing our Chunks
Creating Chunks for the LLM
Querying for Chunks
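The "Indexing our Chunks" and "Querying for Chunks" steps in the syllabus can be sketched with a small in-memory index. A normalised NumPy matrix stands in here for whatever vector database the video's accompanying code uses, and the chunk texts, model name, and top_k value are illustrative assumptions.

```python
# A minimal in-memory sketch of indexing chunks and querying for them.
# A numpy array stands in for a real vector database; all data is illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

chunks = [
    "Semantic chunking groups sentences that discuss the same topic.",
    "A vector index stores one embedding per chunk for fast lookup.",
    "At query time, the most similar chunks are passed to the LLM as context.",
]

# Index: embed each chunk once and normalise for cosine similarity.
index = model.encode(chunks)
index = index / np.linalg.norm(index, axis=1, keepdims=True)

def query(question: str, top_k: int = 2) -> list[str]:
    q = model.encode([question])[0]
    q = q / np.linalg.norm(q)
    scores = index @ q                        # cosine similarity against every chunk
    best = np.argsort(scores)[::-1][:top_k]   # highest-scoring chunk ids first
    return [chunks[i] for i in best]

# The retrieved chunks would then be concatenated into the LLM prompt as context.
print(query("How are chunks retrieved for the LLM?"))
```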
Taught by
James Briggs
Related Courses
Pinecone Vercel Starter Template and RAG - Live Code Review Part 2 (Pinecone via YouTube)
Will LLMs Kill Search? The Future of Information Retrieval (Aleksa Gordić - The AI Epiphany via YouTube)
RAG But Better: Rerankers with Cohere AI - Improving Retrieval Pipelines (James Briggs via YouTube)
Advanced RAG - Contextual Compressors and Filters - Lecture 4 (Sam Witteveen via YouTube)
LangChain Multi-Query Retriever for RAG - Advanced Technique for Broader Vector Space Search (James Briggs via YouTube)