Lies, Damned Lies, and Large Language Models - Measuring and Reducing Hallucinations

Offered By: EuroPython Conference via YouTube

Tags

Python, LangChain, Transformers, Misinformation, Hugging Face, Retrieval Augmented Generation

Course Description

Overview

Explore the challenges and solutions surrounding the tendency of large language models (LLMs) to produce incorrect information, or "hallucinate", in this 29-minute conference talk from EuroPython 2024. Delve into the main causes of hallucinations in LLMs and learn how to measure specific types of misinformation using the TruthfulQA dataset. Discover practical techniques for assessing hallucination rates and comparing models using Python tools such as Hugging Face's `datasets` and `transformers` packages and the `langchain` package. Gain insights into recent initiatives aimed at reducing hallucinations, with a focus on retrieval augmented generation (RAG) and its potential to improve the reliability and usability of LLMs across a range of contexts.
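To make the measurement half of the talk concrete, here is a minimal sketch of a TruthfulQA-style evaluation using the `datasets` and `transformers` packages mentioned above. The model (`gpt2`), the prompt format, and the 20-question sample are illustrative assumptions rather than the speaker's exact setup; the scoring follows the common MC1 approach of checking whether the model assigns the highest likelihood to the labelled-correct choice.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# TruthfulQA's multiple-choice config ships a single "validation" split.
data = load_dataset("truthful_qa", "multiple_choice")["validation"]

# Any causal LM can be slotted in here; gpt2 is just a small stand-in.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def answer_logprob(question: str, answer: str) -> float:
    """Total log-probability the model assigns to `answer` given `question`."""
    prompt_ids = tokenizer(f"Q: {question}\nA:", return_tensors="pt").input_ids
    answer_ids = tokenizer(" " + answer, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, answer_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Position i predicts token i + 1, so the last n answer tokens are
    # predicted by the last n positions of the shifted logits.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    n = answer_ids.shape[1]
    return log_probs[-n:].gather(1, answer_ids[0].unsqueeze(1)).sum().item()

sample = data.select(range(20))  # small sample to keep runtime short
correct = 0
for row in sample:
    choices = row["mc1_targets"]["choices"]
    scores = [answer_logprob(row["question"], c) for c in choices]
    # In mc1_targets, exactly one choice carries the label 1 (true).
    if row["mc1_targets"]["labels"][scores.index(max(scores))] == 1:
        correct += 1

print(f"MC1 accuracy on {len(sample)} questions: {correct / len(sample):.2f}")
```

And a sketch of the RAG pattern the talk closes with, where answers are grounded in retrieved passages rather than the model's parametric memory alone. This assumes a pre-1.0 `langchain` API (these imports have since moved to `langchain_community` and `langchain_openai`), plus the `faiss-cpu` and `openai` packages and an `OPENAI_API_KEY` in the environment; the two documents are made-up examples.

```python
from langchain.chains import RetrievalQA
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

# A toy corpus standing in for a real knowledge base.
docs = [
    "TruthfulQA contains 817 questions designed to elicit imitative falsehoods.",
    "Retrieval augmented generation grounds LLM answers in retrieved documents.",
]

# Embed and index the corpus, then wire the retriever into a QA chain so the
# LLM answers from retrieved context instead of its weights alone.
store = FAISS.from_texts(docs, OpenAIEmbeddings())
rag = RetrievalQA.from_chain_type(llm=OpenAI(), retriever=store.as_retriever())

print(rag.run("How many questions are in TruthfulQA?"))
```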

Syllabus

Lies, damned lies and large language models — Jodie Burchell


Taught by

EuroPython Conference

Related Courses

Pinecone Vercel Starter Template and RAG - Live Code Review Part 2 (Pinecone via YouTube)
Will LLMs Kill Search? The Future of Information Retrieval (Aleksa Gordić - The AI Epiphany via YouTube)
RAG But Better: Rerankers with Cohere AI - Improving Retrieval Pipelines (James Briggs via YouTube)
Advanced RAG - Contextual Compressors and Filters - Lecture 4 (Sam Witteveen via YouTube)
LangChain Multi-Query Retriever for RAG - Advanced Technique for Broader Vector Space Search (James Briggs via YouTube)