In-Context Learning: A Case Study of Simple Function Classes
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore the concept of in-context learning through a comprehensive lecture by Gregory Valiant from Stanford University. Delve into empirical efforts that illuminate fundamental aspects of this learning approach, which occurs at inference time without any updates to the model's parameters. Examine how efficiently Transformers and LSTMs can be trained to in-context learn basic function classes such as linear models, sparse linear models, and small decision trees. Discover methods for evaluating in-context learning algorithms and understand the qualitative differences between architectures in their ability to perform this type of learning. Investigate recent research findings on the connections between language modeling and learning, including whether good language models must possess in-context learning capabilities and whether large language models can perform regression. Consider the potential applications of these primitives in language-centric tasks. Based primarily on collaborative work with Shivam Garg, Dimitris Tsipras, and Percy Liang, this talk provides valuable insights into the evolving field of in-context learning and its implications for AI and machine learning.
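To make the evaluation setup concrete, here is a minimal Python sketch, not code from the talk, of how in-context learning of linear functions is typically scored: sample a hidden linear function, build a prompt of (x, f(x)) pairs plus a query point, and compare predictions against an ordinary least-squares baseline fit to the same context. The function names (sample_task, least_squares_prediction) and the dimensions d and k are illustrative assumptions, not taken from the lecture.

```python
import numpy as np

def sample_task(d=5, k=20, rng=None):
    """Sample a hidden linear function f(x) = w.x, k in-context
    examples, and one held-out query point (illustrative setup)."""
    rng = rng if rng is not None else np.random.default_rng()
    w = rng.normal(size=d)                 # hidden weight vector
    xs = rng.normal(size=(k + 1, d))       # k context inputs + 1 query
    ys = xs @ w                            # noiseless labels
    return xs[:-1], ys[:-1], xs[-1], ys[-1]

def least_squares_prediction(xs, ys, x_query):
    """Reference learner: ordinary least squares fit to the context pairs."""
    w_hat, *_ = np.linalg.lstsq(xs, ys, rcond=None)
    return x_query @ w_hat

# Score the baseline over many sampled prompts; a trained in-context
# learner would be evaluated on exactly the same (context, query) pairs.
rng = np.random.default_rng(0)
errors = []
for _ in range(1000):
    ctx_x, ctx_y, x_q, y_q = sample_task(rng=rng)
    errors.append((least_squares_prediction(ctx_x, ctx_y, x_q) - y_q) ** 2)
print(f"Least-squares baseline MSE: {np.mean(errors):.6f}")
```

A trained Transformer's squared error on the same prompts, plotted against the number of in-context examples, is then compared to such reference learners, which is one way the qualitative differences between architectures mentioned above can be measured.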
Syllabus
In-Context Learning: A Case Study of Simple Function Classes
Taught by
Simons Institute
Related Courses
CMU Advanced NLP: How to Use Pre-Trained Models
Graham Neubig via YouTube
Stanford Seminar 2022 - Transformer Circuits, Induction Heads, In-Context Learning
Stanford University via YouTube
Pretraining Task Diversity and the Emergence of Non-Bayesian In-Context Learning for Regression
Simons Institute via YouTube
AI Mastery: Ultimate Crash Course in Prompt Engineering for Large Language Models
Data Science Dojo via YouTube
New Summarization Techniques for LLM Applications - Building a Note-Taking App
Sam Witteveen via YouTube