Associative Memories as a Building Block in Transformers
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore the role of associative memories in Transformer models in this 39-minute lecture by Alberto Bietti of the Flatiron Institute. Delve into the internal mechanisms of large language models and their ability to store vast amounts of knowledge from training data. Examine theoretical results on gradient-based learning of weight matrices as associative memories and the impact of over-parameterization on storage capacity. Gain insights into how Transformers adapt to new information presented in context (i.e., in the prompt) through analyses of toy tasks for reasoning and factual recall. Enhance your understanding of Transformers as a computational model and the implications for reliable AI systems.
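The core object discussed in the lecture, a weight matrix acting as an associative memory, can be illustrated with a minimal sketch: store key-value pairs of (near-orthogonal) embeddings as a sum of outer products, then recall a value by projecting a key through the matrix and decoding with a nearest-embedding lookup. The dimensions and random embeddings below are illustrative assumptions, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 256, 50  # embedding dimension and number of stored pairs (illustrative)

# Random Gaussian embeddings are near-orthogonal in high dimension;
# rows are unit-scale key (U) and value (V) embeddings.
U = rng.standard_normal((N, d)) / np.sqrt(d)
V = rng.standard_normal((N, d)) / np.sqrt(d)

# Store all pairs in a single weight matrix: W = sum_i v_i u_i^T.
W = V.T @ U  # shape (d, d)

# Recall: map each key through W, decode by the best-matching value embedding.
scores = V @ (W @ U.T)          # scores[j, i] = v_j^T W u_i
recalled = np.argmax(scores, axis=0)
accuracy = np.mean(recalled == np.arange(N))
print(f"recall accuracy: {accuracy:.2f}")  # near-perfect when N << d
```

With N well below d the cross-term interference stays small and recall is essentially exact; pushing N toward d degrades accuracy, which is the storage-capacity trade-off the lecture analyzes.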
Syllabus
Associative memories as a building block in Transformers
Taught by
Simons Institute
Related Courses
Natural Language Processing
Columbia University via Coursera
Developmental Robotics
University of Naples Federico II via Federica
Network Dynamics of Social Behavior
University of Pennsylvania via Coursera
User-centric Computing For Human-Computer Interaction
Indian Institute of Technology Guwahati via Swayam
People, Networks and Neighbours: Understanding Social Dynamics
University of Groningen via FutureLearn