Efficient Streaming Language Models with Attention Sinks - Paper Explained

Offered By: Yannic Kilcher via YouTube

Tags

Machine Learning Courses
Deep Learning Courses
Computational Linguistics Courses
Attention Mechanisms Courses
Transformer Architecture Courses

Course Description

Overview

Explore the concept of streaming language models with attention sinks in this video explanation. Delve into the challenges of deploying large language models (LLMs) in streaming applications and long interactions, where the key-value cache grows without bound. Learn about the attention sink phenomenon, in which models allocate disproportionate attention to the initial tokens, and how retaining those tokens recovers the performance of window attention. Discover the StreamingLLM framework, which enables LLMs trained with a finite attention window to generalize to effectively infinite sequence lengths without fine-tuning. Examine the experimental evidence, the semantics-versus-position debate, and whether attention sinks can be learned as a dedicated token during pre-training. Compare this approach to sparse-attention models such as Big Bird and gain insights into efficient language modeling over very long sequences.
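
The cache-eviction policy at the heart of StreamingLLM is simple to sketch. The following is a minimal, illustrative Python snippet (not the authors' implementation; the function name streaming_keep_indices is hypothetical, and the num_sinks and window values are assumptions chosen for illustration): it returns which KV-cache positions a sink-plus-sliding-window policy would retain, namely the first few "attention sink" tokens plus the most recent tokens, evicting everything in between.

def streaming_keep_indices(seq_len: int, num_sinks: int = 4, window: int = 8) -> list[int]:
    """Return the KV-cache positions to retain for the next decoding step.

    Keeps the first `num_sinks` tokens (the attention sinks) plus a
    sliding window of the `window` most recent tokens. Hypothetical
    sketch of a StreamingLLM-style eviction policy, not the paper's code.
    """
    if seq_len <= num_sinks + window:
        return list(range(seq_len))  # cache still fits; keep everything
    sinks = list(range(num_sinks))   # initial tokens acting as attention sinks
    recent = list(range(seq_len - window, seq_len))  # most recent window
    return sinks + recent

# Example: with 4 sinks and a window of 8, a 20-token sequence keeps
# positions 0-3 and 12-19; the middle tokens are evicted.
print(streaming_keep_indices(20))
# -> [0, 1, 2, 3, 12, 13, 14, 15, 16, 17, 18, 19]

Because the retained cache size is bounded by num_sinks + window regardless of how long generation runs, per-step cost stays constant, which is what lets the model stream over arbitrarily long interactions.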

Syllabus

- Introduction
- What is the problem?
- The hypothesis: Attention Sinks
- Experimental evidence
- Streaming LLMs
- Semantics or position?
- Can attention sinks be learned?
- More experiments
- Comparison to Big Bird


Taught by

Yannic Kilcher

Related Courses

Miracles of Human Language: An Introduction to Linguistics
Leiden University via Coursera
Language and Mind
Indian Institute of Technology Madras via Swayam
Text Analytics with Python
University of Canterbury via edX
Playing With Language
TED-Ed via YouTube
Computational Language: A New Kind of Science
World Science U