Fixing LLM Hallucinations with Retrieval Augmentation in LangChain
Offered By: James Briggs via YouTube
Course Description
Overview
Learn how to address the data freshness problem in Large Language Models (LLMs) through retrieval augmentation using LangChain and Pinecone vector database. Explore techniques to retrieve relevant information from external knowledge bases, enabling LLMs to access up-to-date information beyond their training data. Discover the process of data preprocessing, creating embeddings with OpenAI's Ada 002, setting up a Pinecone vector database, indexing data, and implementing generative question-answering with LangChain. Gain insights into adding citations to generated answers and understand the importance of retrieval augmentation in enhancing LLM performance.
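The workflow described above can be sketched end to end in a few lines of LangChain. The snippet below is a minimal illustration, not the course's exact notebook: it assumes the classic pre-1.0 LangChain and pinecone-client v2 interfaces, and the file name, index name, environment string, and query are placeholders. Newer library versions rename several of these imports.

```python
import os
import pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Pinecone
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

# Preprocess: chunk the source text so each piece fits the embedding model well.
splitter = RecursiveCharacterTextSplitter(chunk_size=400, chunk_overlap=20)
chunks = splitter.split_text(open("knowledge_base.txt").read())  # hypothetical file

# Embed chunks with OpenAI's Ada 002 model (1536-dimensional vectors).
embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")

# Create (or connect to) a Pinecone index sized for Ada 002 embeddings.
pinecone.init(api_key=os.environ["PINECONE_API_KEY"], environment="us-west1-gcp")
index_name = "langchain-retrieval-augmentation"  # placeholder name
if index_name not in pinecone.list_indexes():
    pinecone.create_index(index_name, dimension=1536, metric="cosine")

# Index the chunks, then wire a retriever-backed QA chain over the vector store.
vectorstore = Pinecone.from_texts(chunks, embeddings, index_name=index_name)
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.0),
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
)
print(qa.run("What is retrieval augmentation and why does it help?"))
```

At query time the chain embeds the question, retrieves the most similar chunks from Pinecone, and passes them to the LLM as context, which is what lets the model answer from information outside its training data.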
Syllabus
Hallucination in LLMs
Types of LLM Knowledge
Data Preprocessing with LangChain
Creating Embeddings with OpenAI's Ada 002
Creating the Pinecone Vector Database
Indexing Data into Our Database
Querying with LangChain
Generative Question-Answering with LangChain
Adding Citations to Generated Answers
Summary of Retrieval Augmentation in LangChain
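The "Adding Citations to Generated Answers" step in the syllabus can be approximated with LangChain's sources-aware QA chain. This is a hedged sketch under the same version assumptions as the earlier snippet; it presumes the vector store was populated with a "source" metadata field per chunk, which the course may handle differently.

```python
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.chat_models import ChatOpenAI

# Assumes `vectorstore` was built as in the earlier sketch, with each chunk
# stored alongside a "source" metadata field (e.g. a URL or document title).
qa_with_sources = RetrievalQAWithSourcesChain.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.0),
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
)

result = qa_with_sources({"question": "How does retrieval augmentation reduce hallucinations?"})
print(result["answer"])   # generated answer grounded in retrieved chunks
print(result["sources"])  # citations drawn from the chunks' "source" metadata
```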
Taught by
James Briggs
Related Courses
Genomic Data Science and Clustering (Bioinformatics V) - University of California, San Diego via Coursera
Data Processing Using Python (用Python玩转数据) - Nanjing University via Coursera
Data Mining Project - University of Illinois at Urbana-Champaign via Coursera
Advanced Business Analytics Capstone - University of Colorado Boulder via Coursera
Data Mining: Theories and Algorithms for Tackling Big Data - Tsinghua University via edX