YoVDO

Language Models as World Models? - Understanding Representations and Semantic Control

Offered By: Simons Institute via YouTube

Tags

Language Models Courses
Transformer Models Courses
Representation Learning Courses

Course Description

Overview

Explore the question of whether language models can truly represent the world described in text in this talk by Jacob Andreas from MIT. Delve into recent research examining how transformer language models encode interpretable and controllable representations of facts and situations. Discover evidence from probing experiments suggesting that language model representations contain rudimentary information about entity properties and dynamic states, and that these representations influence downstream language generation. Examine the limitations of even the largest language models, including their tendency to hallucinate facts and contradict input text. Learn about REMEDI, a "representation editing" model designed to correct semantic errors by intervening in language model activations. Consider recent experiments revealing how difficult it is to access and manipulate language models' "knowledge" through simple probes. Gain insights into the ongoing challenges of building transparent and controllable world models for language generation systems.

Syllabus

Language Models as World Models?


Taught by

Simons Institute

Related Courses

Sequence Models
DeepLearning.AI via Coursera
Modern Natural Language Processing in Python
Udemy
Stanford Seminar - Transformers in Language: The Development of GPT Models Including GPT-3
Stanford University via YouTube
Long Form Question Answering in Haystack
James Briggs via YouTube
Spotify's Podcast Search Explained
James Briggs via YouTube