Lies, Damned Lies, and Large Language Models - Measuring and Reducing Hallucinations
Offered By: PyCon US via YouTube
Course Description
Overview
Explore the challenges and solutions surrounding large language models' (LLMs) tendency to produce incorrect information in this 29-minute PyCon US talk. Discover methods to measure and compare hallucination rates among different models, focusing on misinformation regurgitation from training data. Learn to utilize Python tools like Hugging Face's datasets and transformers packages, as well as the langchain package, to assess hallucinations using the TruthfulQA dataset. Gain insights into recent initiatives aimed at reducing LLM hallucinations, including retrieval augmented generation (RAG) techniques, and understand how these approaches can enhance the reliability and usability of LLMs across various contexts.
Syllabus
Talks - Jodie Burchell: Lies, damned lies and large language models
Taught by
PyCon US
Related Courses
Hugging Face on Azure - Partnership and Solutions AnnouncementMicrosoft via YouTube Question Answering in Azure AI - Custom and Prebuilt Solutions - Episode 49
Microsoft via YouTube Open Source Platforms for MLOps
Duke University via Coursera Masked Language Modelling - Retraining BERT with Hugging Face Trainer - Coding Tutorial
rupert ai via YouTube Masked Language Modelling with Hugging Face - Microsoft Sentence Completion - Coding Tutorial
rupert ai via YouTube