Lies, Damned Lies, and Large Language Models - Measuring and Reducing Hallucinations
Offered By: EuroPython Conference via YouTube
Course Description
Overview
Explore the challenges and solutions surrounding large language models' (LLMs) tendency to produce incorrect information or "hallucinate" in this 29-minute conference talk from EuroPython 2024. Delve into the main causes of hallucinations in LLMs and learn how to measure specific types of misinformation using the TruthfulQA dataset. Discover practical techniques for assessing hallucination rates and comparing different models using Python tools like Hugging Face's `datasets` and `transformers` packages, as well as the `langchain` package. Gain insights into recent initiatives aimed at reducing hallucinations, with a focus on retrieval augmented generation (RAG) and its potential to enhance the reliability and usability of LLMs across various contexts.
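Since the overview names the concrete tooling, here is a minimal sketch of the measurement loop it describes: loading TruthfulQA with Hugging Face's `datasets` package and sampling answers with a `transformers` pipeline. The model choice (`gpt2`), the prompt format, and the three-question slice are illustrative assumptions, not the talk's actual setup.

```python
from datasets import load_dataset
from transformers import pipeline

# TruthfulQA's "generation" config pairs each question with reference
# correct and incorrect answers; it ships only a "validation" split.
truthful_qa = load_dataset("truthful_qa", "generation", split="validation")

# Any causal LM can stand in here; "gpt2" is just a small, fast example.
generator = pipeline("text-generation", model="gpt2")

for row in truthful_qa.select(range(3)):
    prompt = f"Q: {row['question']}\nA:"
    result = generator(
        prompt,
        max_new_tokens=50,
        do_sample=False,          # greedy decoding for repeatable output
        return_full_text=False,   # return only the generated continuation
    )
    print(f"Question:  {row['question']}")
    print(f"Model:     {result[0]['generated_text'].strip()}")
    print(f"Reference: {row['best_answer']}\n")
```

Turning these side-by-side outputs into a hallucination rate still requires judging each generated answer against the dataset's `correct_answers` and `incorrect_answers` lists; TruthfulQA also publishes a `multiple_choice` config, which allows scoring by answer likelihood instead of free-form generation.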
Syllabus
Lies, damned lies and large language models — Jodie Burchell
Taught by
EuroPython Conference
Related Courses
Prompt Templates for GPT-3.5 and Other LLMs - LangChain
James Briggs via YouTube
Getting Started with GPT-3 vs. Open Source LLMs - LangChain
James Briggs via YouTube
Chatbot Memory for Chat-GPT, Davinci + Other LLMs - LangChain
James Briggs via YouTube
Chat in LangChain
James Briggs via YouTube
LangChain Data Loaders, Tokenizers, Chunking, and Datasets - Data Prep
James Briggs via YouTube