Chatbots with RAG - LangChain Full Walkthrough
Offered By: James Briggs via YouTube
Course Description
Overview
Learn how to build a chatbot using Retrieval Augmented Generation (RAG) in this comprehensive video tutorial. Explore the entire process from start to finish, using OpenAI's gpt-3.5-turbo Large Language Model (LLM) as the core engine. Implement the chatbot with LangChain's ChatOpenAI class, generate embeddings with OpenAI's text-embedding-ada-002 model, and use the Pinecone vector database as the knowledge base. Gain insights into RAG pipelines, understand the problem of hallucinations in LLMs, and discover techniques to reduce them. Follow along as the tutorial guides you through adding context to prompts, building a vector database, and integrating RAG into your chatbot. Test the final RAG chatbot and learn important considerations when implementing RAG in your own projects.
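The retrieval step the course describes can be sketched in a few lines. This is a toy, offline stand-in, not the tutorial's code: a bag-of-words counter replaces OpenAI's text-embedding-ada-002, and a small in-memory list replaces the Pinecone index, so the example runs without API keys.

```python
# Toy sketch of the RAG retrieval flow: embed the query, rank stored
# documents by cosine similarity, return the top-k matches as context.
from collections import Counter
from math import sqrt


def embed(text: str) -> Counter:
    """Stand-in for a real embedding model: bag-of-words counts."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# Tiny in-memory "knowledge base" standing in for Pinecone.
docs = [
    "LangChain provides a ChatOpenAI class for chat models",
    "Pinecone is a managed vector database",
    "RAG reduces hallucinations by grounding answers in retrieved text",
]
index = [(d, embed(d)) for d in docs]


def retrieve(query: str, k: int = 2) -> list[str]:
    qv = embed(query)
    ranked = sorted(index, key=lambda p: cosine(qv, p[1]), reverse=True)
    return [d for d, _ in ranked[:k]]


print(retrieve("what is a vector database", k=1))
# → ['Pinecone is a managed vector database']
```

In the real pipeline the `retrieve` step is a Pinecone query over ada-002 vectors, but the shape of the logic (embed, rank by similarity, take top-k) is the same.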
Syllabus
Chatbots with RAG
RAG Pipeline
Hallucinations in LLMs
LangChain ChatOpenAI Chatbot
Reducing LLM Hallucinations
Adding Context to Prompts
Building the Vector Database
Adding RAG to Chatbot
Testing the RAG Chatbot
Important Notes when using RAG
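The "Adding Context to Prompts" step above amounts to stitching the retrieved chunks into the prompt before it reaches the LLM. A minimal sketch follows; the template wording is illustrative, not the tutorial's exact prompt.

```python
# Sketch of context injection: retrieved passages are concatenated into
# the prompt so the model answers from them instead of its parametric
# memory, which is how RAG reduces hallucinations.
def augment_prompt(query: str, contexts: list[str]) -> str:
    context_block = "\n".join(contexts)
    return (
        "Answer the question using the contexts below. "
        "If the answer is not in the contexts, say you don't know.\n\n"
        f"Contexts:\n{context_block}\n\n"
        f"Question: {query}"
    )


prompt = augment_prompt(
    "What is Pinecone?",
    ["Pinecone is a managed vector database."],
)
print(prompt)
```

In the course this augmented string becomes the message passed to LangChain's ChatOpenAI; the "say you don't know" instruction is one common guard against the model answering beyond the retrieved context.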
Taught by
James Briggs
Related Courses
Better Llama with Retrieval Augmented Generation - RAG
James Briggs via YouTube
Live Code Review - Pinecone Vercel Starter Template and Retrieval Augmented Generation
Pinecone via YouTube
Nvidia's NeMo Guardrails - Full Walkthrough for Chatbots - AI
James Briggs via YouTube
Hugging Face LLMs with SageMaker - RAG with Pinecone
James Briggs via YouTube
Supercharge Your LLM Applications with RAG
Data Science Dojo via YouTube