YoVDO

When Calibration Goes Awry: Hallucination in Language Models

Offered By: Simons Institute via YouTube

Tags

Language Models Courses, Artificial Intelligence Courses, Machine Learning Courses, Generalization Courses, OpenAI Courses

Course Description

Overview

Explore the phenomenon of hallucinations in language models in this lecture by Adam Kalai of OpenAI. Delve into how calibration, a property naturally encouraged during pre-training, can lead to unexpected hallucinations. Examine the relationship between hallucination rates and domains using the Good-Turing estimator, with a particular focus on notorious sources such as paper titles. Gain insights into potential methods for mitigating hallucinations in AI language models. This hour-long talk, part of the Emerging Generalization Settings series at the Simons Institute, presents joint research with Santosh Vempala conducted while Kalai was at Microsoft Research New England.
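The Good-Turing estimator mentioned above can be illustrated with a short sketch. A minimal, assumption-laden example (not taken from the lecture itself): the Good-Turing "missing mass" estimate is the fraction of observations whose value appears exactly once in a sample, and it serves here as a toy proxy for how often facts (e.g., paper titles) seen only once in training data might be hallucinated about.

```python
from collections import Counter

def good_turing_missing_mass(samples):
    """Good-Turing estimate of the unseen probability mass:
    the fraction of observations whose value occurs exactly once.
    """
    counts = Counter(samples)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(samples)

# Hypothetical toy corpus of "facts" (e.g., paper titles).
# "b", "c", and "e" each appear exactly once (3 singletons out of 7 draws).
facts = ["a", "a", "b", "c", "d", "d", "e"]
print(good_turing_missing_mass(facts))  # 3/7 ≈ 0.4286
```

Intuition: domains dominated by once-seen items (like rarely cited paper titles) yield a high missing-mass estimate, which the lecture connects to higher hallucination rates.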

Syllabus

When calibration goes awry: hallucination in language models


Taught by

Simons Institute

Related Courses

Building Document Intelligence Applications with Azure Applied AI and Azure Cognitive Services
Microsoft via YouTube
Unlocking the Power of OpenAI for Startups - Microsoft for Startups
Microsoft via YouTube
AI Show - Ignite Recap: Arc-Enabled ML, Language Services, and OpenAI
Microsoft via YouTube
Building Intelligent Applications with World-Class AI
Microsoft via YouTube
Build an AI Image Generator with OpenAI & Node.js
Traversy Media via YouTube