How LLMs Might Store Facts - Multilayer Perceptrons in Transformers

Offered By: 3Blue1Brown via YouTube

Tags

Deep Learning, Neural Networks, Transformers, Superposition, Multilayer Perceptrons

Course Description

Overview

Explore the inner workings of large language models (LLMs) in this 23-minute video from 3Blue1Brown. Unpack the multilayer perceptron blocks inside a transformer and see how they might store facts. Start with a quick refresher on transformers, walk through the assumptions behind a toy example, and look inside a multilayer perceptron. Learn how parameters are counted and meet the concept of superposition in neural networks. Gain insights from AI alignment research and mechanistic interpretability studies, with links to additional resources for further exploration. Ideal for anyone interested in the technical details of how LLMs process and store information.
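For a concrete picture before watching, here is a minimal NumPy sketch of the MLP block the video dissects: a linear up-projection, an elementwise nonlinearity, and a linear down-projection whose output is added back to the residual stream. The GELU nonlinearity, the 4x hidden-width ratio, and the GPT-2-small dimensions are illustrative assumptions chosen so the demo runs quickly, not a transcription of the video's exact setup.

```python
import numpy as np

# One transformer MLP block: up-project, apply a nonlinearity, down-project,
# and add the result back to the residual stream.
d_model = 768            # residual-stream width (GPT-2-small scale, assumed for the demo)
d_mlp = 4 * d_model      # hidden width; the 4x ratio is a common convention

rng = np.random.default_rng(0)
W_up = rng.standard_normal((d_mlp, d_model)) * 0.02
b_up = np.zeros(d_mlp)
W_down = rng.standard_normal((d_model, d_mlp)) * 0.02
b_down = np.zeros(d_model)

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def mlp(x):
    h = gelu(W_up @ x + b_up)        # each row of W_up probes x for some feature
    return x + W_down @ h + b_down   # residual connection: add the output back

x = rng.standard_normal(d_model)
print(mlp(x).shape)                  # (768,)

# Counting parameters: two weight matrices plus two bias vectors,
# roughly 8 * d_model**2 per block. At GPT-3 scale (d_model = 12288):
d = 12288
per_block = 2 * d * (4 * d) + 4 * d + d
print(f"{per_block:,} parameters per block")   # about 1.2 billion
```

At GPT-3 scale that count comes to roughly 1.2 billion parameters per block, and across 96 layers to on the order of 116 billion, the majority of GPT-3's 175 billion parameters, which is why these blocks are a natural place to look for stored facts.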

Syllabus

- Where facts in LLMs live
- Quick refresher on transformers
- Assumptions for our toy example
- Inside a multilayer perceptron
- Counting parameters
- Superposition (see the numerical sketch after this list)
- Up next
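
The "Superposition" segment rests on a counterintuitive fact of high-dimensional geometry: an n-dimensional space can hold far more than n directions that are nearly orthogonal to one another, so a network may represent many more features than it has dimensions. Below is a minimal sketch of that idea using random unit vectors; the video goes further and optimizes the angles, which this sketch does not.

```python
import numpy as np

# Pack 10x more directions than dimensions and measure their pairwise angles.
# Random unit vectors in high dimensions are nearly orthogonal on average.
rng = np.random.default_rng(0)
n_dims, n_features = 100, 1000

V = rng.standard_normal((n_features, n_dims))
V /= np.linalg.norm(V, axis=1, keepdims=True)   # normalize each row to unit length

# Pairwise cosines of all distinct pairs (upper triangle, diagonal excluded).
i, j = np.triu_indices(n_features, k=1)
cos = np.clip((V @ V.T)[i, j], -1.0, 1.0)
angles = np.degrees(np.arccos(cos))

print(f"{len(angles):,} pairs: angles from {angles.min():.1f} to "
      f"{angles.max():.1f} degrees, mean {angles.mean():.1f}")
```

Increasing n_dims concentrates the angles ever more tightly around 90 degrees, which is the sense in which high-dimensional spaces leave room for many almost-independent feature directions.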


Taught by

3Blue1Brown

Related Courses

- Neural Networks for Machine Learning (University of Toronto via Coursera)
- Good Brain, Bad Brain: Basics (University of Birmingham via FutureLearn)
- Statistical Learning with R (Stanford University via edX)
- Machine Learning 1—Supervised Learning (Brown University via Udacity)
- Fundamentals of Neuroscience, Part 2: Neurons and Networks (Harvard University via edX)