Computational Benefits and Limitations of Transformers and State-Space Models
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore the computational advantages and constraints of Transformers and state-space models in this 51-minute lecture by Eran Malach of the Kempner Institute at Harvard University. Delve into the mechanisms that enable retrieval, copying, and length generalization in language models, and examine how architectural choices affect performance on these fundamental tasks. Discover theoretical and empirical evidence that Transformers outperform LSTMs and state-space models such as Mamba at copying and retrieval, and learn how Transformers' ability to copy long sequences can be harnessed for length generalization across a range of algorithmic and arithmetic tasks, yielding insight into the capabilities and limitations of different sequence modeling architectures.
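To make the copying contrast concrete, below is a minimal sketch, not taken from the lecture: it illustrates the induction-head-style lookup that attention layers can implement to copy a sequence of arbitrary length. The function name, the toy task setup, and the k=1 simplification (distinct tokens, so a single-token match is unique) are illustrative assumptions; the mechanism studied in this line of work matches longer n-grams so that repeated tokens are disambiguated.

```python
import random

def induction_lookup(context, k=1):
    """Induction-head-style copying: match the last k tokens of the
    context against an earlier occurrence and emit the token that
    followed it. Attention can realize this lookup directly, whereas a
    model with a fixed-size recurrent state must compress the whole
    sequence into that state before the separator."""
    key = tuple(context[-k:])
    for i in range(len(context) - k):  # unique match, since tokens are distinct
        if tuple(context[i:i + k]) == key:
            return context[i + k]
    return None  # no match found (cannot happen in this toy setup)

# Toy copy task: <sequence> <SEP> <sequence again>, emitted token by token.
seq = random.sample(range(1000), 50)  # distinct tokens keep 1-gram matches unique
context = seq + ["<SEP>", seq[0]]     # first copied token already given
copied = [seq[0]]
while len(copied) < len(seq):
    nxt = induction_lookup(context)
    copied.append(nxt)
    context.append(nxt)
assert copied == seq  # perfect copy, regardless of sequence length
```

Because the lookup only needs to locate one matching position, it works for sequences far longer than any fixed budget, which is the intuition behind the copying and length-generalization results the lecture discusses; a state-space model's fixed-size hidden state, by contrast, caps the amount of information it can carry across the separator.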
Syllabus
Computational Benefits and Limitations of Transformers and State-Space Models
Taught by
Simons Institute
Related Courses
Natural Language Processing (Columbia University via Coursera)
Developmental Robotics (University of Naples Federico II via Federica)
Network Dynamics of Social Behavior (University of Pennsylvania via Coursera)
User-centric Computing For Human-Computer Interaction (Indian Institute of Technology Guwahati via Swayam)
People, Networks and Neighbours: Understanding Social Dynamics (University of Groningen via FutureLearn)