Chatbots with RAG - LangChain Full Walkthrough
Offered By: James Briggs via YouTube
Course Description
Overview
Learn how to build a chatbot using Retrieval Augmented Generation (RAG) in this comprehensive video tutorial. Explore the entire process from start to finish, utilizing OpenAI's gpt-3.5-turbo Large Language Model (LLM) as the core engine. Implement the chatbot with LangChain's ChatOpenAI class, use OpenAI's text-embedding-ada-002 as the embedding model, and use the Pinecone vector database as the knowledge base. Gain insights into RAG pipelines, understand the challenge of hallucinations in LLMs, and discover techniques to reduce them. Follow along as the tutorial guides you through adding context to prompts, building a vector database, and integrating RAG into your chatbot. Test the final RAG chatbot and learn important considerations when implementing RAG in your own projects.
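The overall flow can be condensed into a short sketch. The snippet below is illustrative only, not the code from the video: it assumes the classic pinecone-client API and a LangChain release where ChatOpenAI lives under langchain.chat_models, and the index name "rag-demo", the sample documents, and the augment_prompt helper are hypothetical placeholders.

```python
import pinecone
from langchain.chat_models import ChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.schema import HumanMessage, SystemMessage

# Chatbot engine: gpt-3.5-turbo behind LangChain's ChatOpenAI class.
chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.0)

# Embedding model used for both documents and queries.
embed = OpenAIEmbeddings(model="text-embedding-ada-002")

# Knowledge base: a Pinecone index (1536 dimensions to match ada-002).
pinecone.init(api_key="YOUR_PINECONE_KEY", environment="YOUR_PINECONE_ENV")
if "rag-demo" not in pinecone.list_indexes():
    pinecone.create_index("rag-demo", dimension=1536, metric="cosine")
index = pinecone.Index("rag-demo")

# Build the vector database: embed text chunks and upsert them with metadata.
docs = [
    "RAG pairs an LLM with an external knowledge base to ground its answers.",
    "Retrieved context is inserted into the prompt before the model responds.",
]
vectors = embed.embed_documents(docs)
index.upsert([(str(i), vec, {"text": doc})
              for i, (vec, doc) in enumerate(zip(vectors, docs))])

def augment_prompt(query: str, k: int = 3) -> str:
    """Retrieve the top-k most similar chunks and prepend them to the query."""
    results = index.query(vector=embed.embed_query(query),
                          top_k=k, include_metadata=True)
    context = "\n".join(m["metadata"]["text"] for m in results["matches"])
    return f"Answer the query using the context below.\n\nContext:\n{context}\n\nQuery: {query}"

# RAG chat: the retrieved context grounds the model and helps reduce hallucinations.
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content=augment_prompt("What is retrieval augmented generation?")),
]
print(chat(messages).content)
```

The same pattern scales from this toy example to a real knowledge base: chunk and embed your documents once, then embed each incoming question, retrieve the nearest chunks, and let the chat model answer from that context.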
Syllabus
Chatbots with RAG
RAG Pipeline
Hallucinations in LLMs
LangChain ChatOpenAI Chatbot
Reducing LLM Hallucinations
Adding Context to Prompts
Building the Vector Database
Adding RAG to Chatbot
Testing the RAG Chatbot
Important Notes when using RAG
Taught by
James Briggs
Related Courses
TensorFlow for NLP: Text Embedding and Classification (Coursera Project Network via Coursera)
Google Sites Essential Training (LinkedIn Learning)
2024 Advanced Machine Learning and Deep Learning Projects (Udemy)
Intro to Multi-Modal ML with OpenAI's CLIP (James Briggs via YouTube)
OpenAI Python API Bootcamp: Learn to use AI, GPT, and more! (Udemy)