Two Stories in Mechanistic Interpretation of Natural and Artificial Neural Computation
Offered By: Institute for Pure & Applied Mathematics (IPAM) via YouTube
Course Description
Overview
Explore a 55-minute conference talk by Cengiz Pehlevan of Harvard University, presented at IPAM's Theory and Practice of Deep Learning Workshop. Delve into two stories of mechanistic interpretation in natural and artificial neural computation. Examine the remarkable ability of Transformers to perform in-context learning (ICL): solving new tasks from examples supplied in the prompt, without any parameter updates at inference time. Investigate an exactly solvable model of ICL for linear regression tasks using linear attention, which yields sharp asymptotics for the learning curve in a scaling regime where the token dimension grows to infinity. Discover a double-descent learning curve and a phase transition between low and high task-diversity regimes, distinguishing memorization of the pretraining tasks from genuine in-context learning and generalization to new ones. Validate the theoretical findings through experiments with both linear attention and full nonlinear Transformer architectures.
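To make the linear-attention mechanism behind this model concrete, here is a minimal NumPy sketch of a single linear attention head performing in-context linear regression. It assumes a simplified setup: the dimensions, the task-sampling scheme, and the choice of interaction matrix Gamma are illustrative and not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_ctx = 8, 256  # token (feature) dimension and context length

def make_task():
    """Sample one linear-regression task: labels y = w @ x."""
    w = rng.standard_normal(d) / np.sqrt(d)
    X = rng.standard_normal((n_ctx, d))   # context inputs
    y = X @ w                             # context labels
    x_q = rng.standard_normal(d)          # query input
    return X, y, x_q, w @ x_q             # last entry: true query label

def linear_attention_predict(X, y, x_q, Gamma):
    """One linear-attention head (no softmax): the prediction is a
    score-weighted sum of the context labels, where the score of
    context point x_i is x_i @ Gamma @ x_q, and Gamma is the learned
    d x d query-key interaction matrix (here set by hand)."""
    scores = X @ Gamma @ x_q              # shape (n_ctx,)
    return scores @ y / n_ctx

# With Gamma = I this reduces to y_hat = (1/n) * sum_i (x_i . x_q) y_i,
# a one-step kernel estimate of w @ x_q that becomes accurate as the
# empirical input covariance approaches the identity.
X, y, x_q, y_true = make_task()
print(linear_attention_predict(X, y, x_q, np.eye(d)), y_true)
```

In the solvable model, Gamma is learned by pretraining on many such regression tasks; the sketch fixes it to the identity only to show how a single linear attention pass can already act as an in-context least-squares estimator.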
Syllabus
Cengiz Pehlevan - 2 stories in mechanistic interpretation of natural & artificial neural computation
Taught by
Institute for Pure & Applied Mathematics (IPAM)
Related Courses
Introduction to Artificial Intelligence
Stanford University via Udacity
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Artificial Intelligence for Robotics
Stanford University via Udacity
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent