Better Llama with Retrieval Augmented Generation - RAG
Offered By: James Briggs via YouTube
Course Description
Overview
Learn how to enhance Llama 2 using Retrieval Augmented Generation (RAG) in this informative tutorial video. Discover how RAG keeps Large Language Models up to date, reduces hallucinations, and enables source citation. Follow along as the instructor builds a RAG pipeline using the Pinecone vector database, the Llama 2 13B chat model, and the Hugging Face and LangChain libraries. Explore topics such as Python prerequisites, Llama 2 access, RAG fundamentals, creating embeddings with open-source tools, building a Pinecone vector database, initializing Llama 2, and comparing standard Llama 2 with RAG-enhanced Llama 2. Gain practical insights into implementing RAG for improved AI performance and accuracy.
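For orientation, here is a minimal sketch of what such a pipeline can look like when wired together with LangChain. It assumes 2023-era versions of langchain, pinecone-client, sentence-transformers, and transformers; the index name, embedding model, and query are illustrative placeholders rather than the exact code from the video.

```python
# Sketch of a RAG pipeline: open-source embeddings + Pinecone retrieval + Llama 2 generation.
# Index name, model IDs, and credentials below are assumptions for illustration.

import pinecone
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Pinecone
from langchain.llms import HuggingFacePipeline
from langchain.chains import RetrievalQA
from transformers import pipeline

# Open-source embedding model (any sentence-transformers model can be used here).
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)

# Connect to a Pinecone index that already contains the embedded document chunks.
pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")
vectorstore = Pinecone.from_existing_index("llama-2-rag", embeddings)

# Llama 2 13B chat served through a Hugging Face text-generation pipeline.
generate = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-13b-chat-hf",
    device_map="auto",
    max_new_tokens=512,
)
llm = HuggingFacePipeline(pipeline=generate)

# RetrievalQA ties retrieval and generation together: relevant chunks are fetched
# from Pinecone and injected into the prompt before Llama 2 answers.
rag_pipeline = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
)

print(rag_pipeline.run("What is so special about Llama 2?"))
```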
Syllabus
Retrieval Augmented Generation with Llama 2
Python Prerequisites and Llama 2 Access
Retrieval Augmented Generation 101
Creating Embeddings with Open Source
Building Pinecone Vector DB
Creating Embedding Dataset
Initializing Llama 2
Creating the RAG RetrievalQA Component
Comparing Llama 2 vs RAG Llama 2
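The final syllabus step, comparing the two setups, typically amounts to asking the same question of the raw model and of the RAG pipeline. A hedged illustration, reusing the names from the sketch above (the question is a placeholder, not taken from the video):

```python
# Compare standard Llama 2 against RAG-enhanced Llama 2 on the same question.
question = "What is new in Llama 2 compared to the original Llama?"

# Standard Llama 2: answers only from what it learned during pretraining.
print(llm(question))

# RAG Llama 2: the answer is grounded in documents retrieved from Pinecone.
print(rag_pipeline.run(question))
```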
Taught by
James Briggs
Related Courses
Prompt Templates for GPT-3.5 and Other LLMs - LangChain (James Briggs via YouTube)
Getting Started with GPT-3 vs. Open Source LLMs - LangChain (James Briggs via YouTube)
Chatbot Memory for Chat-GPT, Davinci + Other LLMs - LangChain (James Briggs via YouTube)
Chat in LangChain (James Briggs via YouTube)
LangChain Data Loaders, Tokenizers, Chunking, and Datasets - Data Prep (James Briggs via YouTube)