Passive Learning of Causal Strategies in Language Models
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore the surprising capabilities of passive learning for understanding causality and experimentation in this 56-minute talk by Andrew Lampinen of Google DeepMind. Delve into the distinction between passive (observational) and active (interventional) learning, and discover how language models can acquire causal strategies through passive imitation of expert interventional data. Examine empirical evidence showing how agents can apply these strategies to uncover novel causal structures, even in complex environments with high-dimensional observations. Learn about the role of natural language explanations in enhancing generalization, including to out-of-distribution scenarios with confounded training data. Investigate how language models, trained solely on next-word prediction, can extrapolate causal intervention strategies from few-shot prompts. Reflect on the implications of these findings for understanding language model behaviors and capabilities, and consider open questions regarding whether AI can use explanations in a more human-like manner.
Syllabus
What can be passively learned about causality?
Taught by
Simons Institute
Related Courses
Stanford Seminar - Enabling NLP, Machine Learning, and Few-Shot Learning Using Associative Processing - Stanford University via YouTube
GUI-Based Few Shot Classification Model Trainer - Demo - James Briggs via YouTube
HyperTransformer - Model Generation for Supervised and Semi-Supervised Few-Shot Learning - Yannic Kilcher via YouTube
GPT-3 - Language Models Are Few-Shot Learners - Yannic Kilcher via YouTube
IMAML - Meta-Learning with Implicit Gradients - Yannic Kilcher via YouTube