Emergence and Grokking in Simple Neural Architectures

Offered By: Institute for Pure & Applied Mathematics (IPAM) via YouTube

Tags

Machine Learning, Neural Networks, Transformers, Modular Arithmetic, Multilayer Perceptrons

Course Description

Overview

Explore a thought-provoking lecture on emergence and grokking phenomena in simple neural network architectures. Delve into Misha Belkin's presentation at IPAM's Theory and Practice of Deep Learning Workshop, where he argues that Multi-Layer Perceptrons (MLPs) exhibit remarkable behaviors similar to those observed in modern Large Language Models. Examine the challenges in understanding how even 2-layer MLPs learn relatively simple problems such as modular arithmetic, where "grokking" occurs: test performance improves abruptly long after the training data has been fit. Discover recent progress in the field and learn about Recursive Feature Machines as a potential model for analyzing emergent phenomena in neural architectures. Gain valuable insights into the computational aspects of modern neural networks and their implications for deep learning theory and practice.
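To make the setup concrete, here is a minimal sketch (in PyTorch, not Belkin's actual code) of the kind of experiment the lecture discusses: a 2-layer MLP trained on modular addition, where training accuracy saturates early and test accuracy jumps only much later. The modulus, hidden width, optimizer settings, and train/test split below are illustrative assumptions, not values from the talk.

```python
# Sketch of the grokking-modular-arithmetic setup: a 2-layer MLP on (a + b) mod p.
import torch
import torch.nn as nn

torch.manual_seed(0)
p = 97  # modulus; the task is (a + b) mod p

# All p*p input pairs and their labels.
pairs = torch.cartesian_prod(torch.arange(p), torch.arange(p))
labels = (pairs[:, 0] + pairs[:, 1]) % p

# One-hot encode the two operands and concatenate them.
X = torch.cat([nn.functional.one_hot(pairs[:, 0], p),
               nn.functional.one_hot(pairs[:, 1], p)], dim=1).float()

# Random 50/50 train/test split, a regime where grokking is commonly reported.
perm = torch.randperm(len(X))
n_train = len(X) // 2
tr, te = perm[:n_train], perm[n_train:]

# A "2-layer" MLP: one hidden layer, then a linear readout over the p classes.
model = nn.Sequential(
    nn.Linear(2 * p, 512),
    nn.ReLU(),
    nn.Linear(512, p),
)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(20000):
    opt.zero_grad()
    loss = loss_fn(model(X[tr]), labels[tr])
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            tr_acc = (model(X[tr]).argmax(1) == labels[tr]).float().mean()
            te_acc = (model(X[te]).argmax(1) == labels[te]).float().mean()
        # Grokking shows up as train accuracy hitting ~1.0 long before test does.
        print(f"step {step:6d}  train {tr_acc:.2f}  test {te_acc:.2f}")
```

Weight decay is kept relatively large here because delayed generalization of this kind is typically observed with explicit regularization; with these knobs one can reproduce the train/test gap the lecture analyzes.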

Syllabus

Misha Belkin - Emergence and grokking in "simple" architectures - IPAM at UCLA


Taught by

Institute for Pure & Applied Mathematics (IPAM)

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Natural Language Processing
Columbia University via Coursera
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent