Hands-On AI: RAG using LlamaIndex
Offered By: LinkedIn Learning
Course Description
Overview
Learn how to improve AI query quality and data accuracy by applying LlamaIndex to retrieval-augmented generation (RAG) workflows.
Syllabus
Introduction
- Overcome the limitations of LLMs with RAG
- Limitations of LLMs
- Use cases for retrieval-augmented generation (RAG)
- Using GitHub Codespaces
- Setting up your environment
- Choosing an LLM and embeddings provider
- Setting up LLM accounts
- Choosing a vector database
- Setting up a Qdrant account
- Downloading our data
- How LlamaIndex is organized
- Using LLMs
- Loading data
- Indexing
- Storing and retrieving
- Querying
- Agents
- Components of a RAG system
- Ingestion pipeline
- Query pipeline
- Prompt engineering for RAG
- Data preparation for RAG
- Putting it all together
- Drawbacks of Naive RAG
- Introduction to RAG evaluation
- Evaluation metrics
- How to create an evaluation set
- How we can improve on Naive RAG
- Optimizing chunk size
- Small to big retrieval
- Semantic chunking
- Metadata extraction
- Document summary index
- Query transformation
- Node post-processing
- Re-ranking
- FLARE
- Prompt compression
- Self-correcting
- Hybrid retrieval
- Agentic RAG
- Ensemble retrieval
- Ensemble query engine
- LlamaIndex evaluation
- Comparative analysis of retrieval-augmented generation techniques
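The syllabus centers on the naive RAG pipeline: an ingestion stage (load, chunk, embed, store) followed by a query stage (embed the question, retrieve the most similar chunks, assemble a prompt for the LLM). As a rough orientation to that flow, here is a minimal, library-free Python sketch. It uses a toy bag-of-words "embedding" in place of a real embeddings provider and a plain list in place of a vector database such as Qdrant; all names (`NaiveRAG`, `chunk_size`, `top_k`) are illustrative, not the LlamaIndex API.

```python
import re
from collections import Counter
from math import sqrt

def chunk(text: str, chunk_size: int = 8) -> list[str]:
    """Naive chunking: split text into fixed-size word windows."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words term-frequency vector."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class NaiveRAG:
    def __init__(self) -> None:
        # Stand-in for a vector store: (chunk text, chunk vector) pairs.
        self.store: list[tuple[str, Counter]] = []

    def ingest(self, docs: list[str]) -> None:
        """Ingestion pipeline: chunk each document, embed, and store."""
        for doc in docs:
            for c in chunk(doc):
                self.store.append((c, embed(c)))

    def retrieve(self, query: str, top_k: int = 2) -> list[str]:
        """Query pipeline, step 1: rank stored chunks by cosine similarity."""
        qv = embed(query)
        ranked = sorted(self.store, key=lambda p: cosine(qv, p[1]), reverse=True)
        return [c for c, _ in ranked[:top_k]]

    def build_prompt(self, query: str) -> str:
        """Query pipeline, step 2: assemble retrieved context into a prompt."""
        context = "\n".join(self.retrieve(query))
        return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

rag = NaiveRAG()
rag.ingest(["Qdrant is a vector database used to store and search embeddings.",
            "LlamaIndex loads data, builds indexes, and runs query pipelines."])
print(rag.build_prompt("What is Qdrant?"))
```

In a real system the final prompt would be sent to an LLM; the course's later lessons (chunk-size optimization, re-ranking, hybrid retrieval, and so on) are refinements of exactly these retrieval and prompt-assembly steps.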
Taught by
Harpreet Sahota
Related Courses
- Vector Similarity Search (Data Science Dojo via YouTube)
- Supercharging Semantic Search with Pinecone and Cohere (Pinecone via YouTube)
- Search Like You Mean It - Semantic Search with NLP and a Vector Database (Pinecone via YouTube)
- The Rise of Vector Data (Pinecone via YouTube)
- NER Powered Semantic Search in Python (James Briggs via YouTube)