
When Calibration Goes Awry: Hallucination in Language Models

Offered By: Simons Institute via YouTube

Tags

Language Models Courses, Artificial Intelligence Courses, Machine Learning Courses, Generalization Courses, OpenAI Courses

Course Description

Overview

Explore the phenomenon of hallucinations in language models in this lecture by Adam Kalai of OpenAI. Delve into how calibration, a property naturally encouraged during pre-training, can itself lead to hallucinations. Examine the relationship between hallucination rates and domains using the Good-Turing estimator, with a particular focus on notoriously hallucination-prone outputs such as paper titles. Gain insights into potential methods for mitigating hallucinations in language models. This hour-long talk, part of the Emerging Generalization Settings series at the Simons Institute, presents joint research with Santosh Vempala conducted while Kalai was at Microsoft Research New England.
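As background for the Good-Turing connection mentioned above: the talk's argument ties the hallucination rate to the fraction of facts that appear exactly once in the training data, which is the classic Good-Turing estimate of unseen probability mass. The following is a minimal illustrative sketch (not from the lecture) of that estimator, with a hypothetical list of paper titles standing in for training-corpus facts.

```python
from collections import Counter

def good_turing_missing_mass(observations):
    """Good-Turing estimate of the unseen probability mass:
    the fraction of observations whose value occurs exactly once."""
    counts = Counter(observations)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(observations)

# Hypothetical example: paper titles seen in a training corpus.
# A high singleton fraction means many "facts" appear only once,
# which, per the talk's thesis, pressures a calibrated model to hallucinate.
titles = [
    "Attention Is All You Need",
    "Attention Is All You Need",
    "A Rare Workshop Paper",
    "Another One-Off Title",
]
print(good_turing_missing_mass(titles))  # 0.5
```

Under these assumptions, a domain where half the facts are singletons (like obscure paper titles) would be far more hallucination-prone than one dominated by frequently repeated facts.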

Syllabus

When calibration goes awry: hallucination in language models


Taught by

Simons Institute

Related Courses

Launching into Machine Learning 日本語版
Google Cloud via Coursera
Launching into Machine Learning auf Deutsch
Google Cloud via Coursera
Launching into Machine Learning en Français
Google Cloud via Coursera
Launching into Machine Learning en Español
Google Cloud via Coursera
Основы машинного обучения
Higher School of Economics via Coursera