Pretraining Task Diversity and the Emergence of Non-Bayesian In-Context Learning for Regression
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore the phenomenon of in-context learning (ICL) in pretrained transformers in this lecture by Surya Ganguli of Stanford University. Delve into the fundamental question of whether ICL can solve tasks significantly different from those seen during pretraining. Examine ICL performance on linear regression while varying the diversity of tasks in the pretraining dataset. Discover that there is a task diversity threshold for the emergence of ICL on new tasks. Learn how, below this threshold, transformers behave like Bayesian estimators with a prior over the pretraining tasks, while beyond it they outperform that estimator on unseen tasks, aligning instead with ridge regression. Understand the critical role of task diversity in enabling transformers to solve new tasks in-context by deviating from the Bayes-optimal estimator for the pretraining distribution. Gain insight into the interplay between task diversity, data scale, and model scale in the emergence of ICL capabilities.
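To make the comparison concrete, below is a minimal NumPy sketch of the two reference estimators the lecture contrasts on linear regression: ridge regression, which is the Bayes-optimal predictor under a Gaussian task prior, and the posterior-weighted average over a finite pool of pretraining tasks, which is Bayes-optimal when the prior is uniform over that pool. The parameter values, the pool size M, and all function names here are illustrative assumptions, not taken from the lecture.

import numpy as np

# Assumed setup (illustrative, not from the lecture): task vectors w with
# prior w ~ N(0, I/d), observations y = w.x + noise, noise std sigma.
d, sigma = 8, 0.25
rng = np.random.default_rng(0)

def ridge_predict(X, y, x_query, lam):
    # Posterior mean under the Gaussian task prior is ridge regression:
    # w_hat = (X^T X + lam I)^{-1} X^T y, with lam = sigma^2 * d for this prior.
    w_hat = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return w_hat @ x_query

def finite_pool_predict(X, y, x_query, task_pool):
    # Bayes estimator when the prior is uniform over a finite pool of
    # pretraining tasks: posterior-weighted average of the pool's weights.
    log_lik = -np.sum((y[None, :] - task_pool @ X.T) ** 2, axis=1) / (2 * sigma**2)
    post = np.exp(log_lik - log_lik.max())   # stabilize before normalizing
    post /= post.sum()
    return (post @ task_pool) @ x_query

# One in-context episode drawn from a task *outside* the pretraining pool.
M = 16                                       # pretraining task diversity
task_pool = rng.normal(0.0, 1.0 / np.sqrt(d), size=(M, d))
w_new = rng.normal(0.0, 1.0 / np.sqrt(d), size=d)
X = rng.normal(size=(20, d))
y = X @ w_new + sigma * rng.normal(size=20)
x_q = rng.normal(size=d)

print("ridge      :", ridge_predict(X, y, x_q, lam=sigma**2 * d))
print("finite pool:", finite_pool_predict(X, y, x_q, task_pool))
print("truth      :", float(w_new @ x_q))

In the lecture's framing, a transformer pretrained below the diversity threshold makes in-context predictions that track the finite-pool estimator, while one pretrained above the threshold tracks the ridge solution even on tasks outside its pretraining pool.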
Syllabus
Pretraining Task Diversity and the Emergence of Non-Bayesian In-Context Learning for Regression
Taught by
Simons Institute
Related Courses
Utilisez des modèles supervisés non linéaires (CentraleSupélec via OpenClassrooms)
Supervised Machine Learning: Regression (IBM via Coursera)
General Linear Models - Regression (statisticsmatt via YouTube)
Machine Learning (YouTube)
Regularization Part 1 - Ridge (L2) Regression (StatQuest with Josh Starmer via YouTube)