Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention

Offered By: Yannic Kilcher via YouTube

Tags

Transformer Models Courses
Machine Learning Courses
Algorithm Design Courses
Self-Attention Courses

Course Description

Overview

Explore a comprehensive video explanation of the Nyströmformer algorithm, a novel approach to approximating self-attention in Transformers with linear memory and time requirements. Delve into the quadratic memory bottleneck in self-attention, the softmax operation, and the Nyström approximation method. Gain insights into the landmark method, full algorithm implementation, theoretical guarantees, and techniques for avoiding large attention matrices. Compare subsampling keys with negative sampling, and examine experimental results demonstrating the algorithm's effectiveness. Enhance your understanding of this innovative solution for processing longer sequences in natural language processing tasks.
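For a concrete picture of the Nyström reconstruction the video walks through, here is a minimal NumPy sketch of the core idea: landmark queries and keys are formed as segment means, and three small softmax kernels replace the full n × n attention matrix. The function names, the landmark count, and the use of np.linalg.pinv are illustrative choices only; the paper itself uses an iterative Moore-Penrose approximation rather than an exact pseudoinverse.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def nystrom_attention(Q, K, V, num_landmarks=8):
    """Nystrom approximation of softmax(Q K^T / sqrt(d)) V.

    Memory cost is O(n * m) for m landmarks instead of the O(n^2)
    of exact self-attention. Landmarks are segment means of Q and K;
    this sketch assumes num_landmarks divides the sequence length.
    """
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    seg = n // num_landmarks
    Q_land = Q.reshape(num_landmarks, seg, d).mean(axis=1)  # (m, d)
    K_land = K.reshape(num_landmarks, seg, d).mean(axis=1)  # (m, d)
    # Three small kernels stand in for the full n x n attention matrix.
    F = softmax(Q @ K_land.T * scale)       # (n, m): queries vs. landmark keys
    A = softmax(Q_land @ K_land.T * scale)  # (m, m): landmark-landmark kernel
    B = softmax(Q_land @ K.T * scale)       # (m, n): landmark queries vs. keys
    # Nystrom reconstruction; the paper approximates pinv(A) iteratively,
    # np.linalg.pinv is used here for brevity.
    return F @ np.linalg.pinv(A) @ (B @ V)

# Sanity check against exact attention on a short sequence.
rng = np.random.default_rng(0)
n, d = 64, 16
Q, K, V = rng.standard_normal((3, n, d))
out_approx = nystrom_attention(Q, K, V, num_landmarks=8)
out_exact = softmax(Q @ K.T / np.sqrt(d)) @ V
print("max abs error:", np.abs(out_approx - out_exact).max())
```

With the landmark count m held fixed, every intermediate matrix grows only linearly with sequence length n, which is the linear memory and time claim the video examines.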

Syllabus

- Intro & Overview
- The Quadratic Memory Bottleneck in Self-Attention
- The Softmax Operation in Attention
- Nyström Approximation
- Getting Around the Softmax Problem
- Intuition for Landmark Method
- Full Algorithm
- Theoretical Guarantees
- Avoiding the Large Attention Matrix
- Subsampling Keys vs Negative Sampling
- Experimental Results
- Conclusion & Comments


Taught by

Yannic Kilcher

Related Courses

Sequence Models
DeepLearning.AI via Coursera
Modern Natural Language Processing in Python
Udemy
Stanford Seminar - Transformers in Language: The Development of GPT Models Including GPT-3
Stanford University via YouTube
Long Form Question Answering in Haystack
James Briggs via YouTube
Spotify's Podcast Search Explained
James Briggs via YouTube