In-Context Learning: A Case Study of Simple Function Classes
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore the concept of in-context learning through a comprehensive lecture by Gregory Valiant from Stanford University. Delve into empirical efforts that illuminate fundamental aspects of this learning approach, which occurs at inference time without model parameter updates. Examine the efficiency of training Transformers and LSTMs to in-context learn basic function classes like linear models, sparse linear models, and small decision trees. Discover methods for evaluating in-context learning algorithms and understand the qualitative differences between various architectures in their ability to perform this type of learning. Investigate recent research findings on the connections between language modeling and learning, including whether good language models must possess in-context learning capabilities and if large language models can perform regression. Consider the potential applications of these primitives in language-centric tasks. Based primarily on collaborative work with Shivam Garg, Dimitris Tsipras, and Percy Liang, this talk provides valuable insights into the evolving field of in-context learning and its implications for AI and machine learning.
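The evaluation setup described above — training a model to in-context learn a function class such as linear models — can be illustrated with a small sketch. The snippet below is a minimal, hypothetical illustration (not the talk's actual code) of how such a prompt is constructed and evaluated: a random linear task, a prompt of (x, f(x)) pairs, a query point, and an ordinary-least-squares baseline of the kind in-context learners are compared against. It assumes only NumPy; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_examples = 5, 20  # input dimension and number of in-context examples

# Draw a random linear task f(x) = w . x from the function class.
w = rng.standard_normal(d)

# Build an in-context prompt: (x_i, f(x_i)) pairs followed by a
# query point x_query whose label the learner must predict.
xs = rng.standard_normal((n_examples, d))
ys = xs @ w
x_query = rng.standard_normal(d)

# Baseline "in-context learner": ordinary least squares fit to the
# prompt examples, then evaluated at the query point. A trained
# Transformer would instead consume the prompt sequence directly.
w_hat, *_ = np.linalg.lstsq(xs, ys, rcond=None)
pred = x_query @ w_hat

error = (pred - x_query @ w) ** 2
print(f"squared error at query: {error:.2e}")
```

With more noiseless examples than dimensions, least squares recovers the task exactly, so the baseline's query error is essentially zero; the interesting question the lecture addresses is how closely a Transformer trained on such prompts matches this behavior, including on sparse linear models and small decision trees.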
Syllabus
In-Context Learning: A Case Study of Simple Function Classes
Taught by
Simons Institute
Related Courses
Introduction to Linear Models and Matrix Algebra - Harvard University via edX
Case Studies in Functional Genomics - Harvard University via edX
Introduction to Marketing Analytics (Introdução ao Marketing Analítico) - Insper via Coursera
Fundamentals of Quantitative Modeling - University of Pennsylvania via Coursera
Learning from Labeled Data (Обучение на размеченных данных) - Moscow Institute of Physics and Technology via Coursera