Efficient Streaming Language Models with Attention Sinks - Paper Explained
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore the concept of streaming language models with attention sinks in this video explanation. Delve into the challenges of deploying Large Language Models (LLMs) in streaming applications and long interactions, where the key-value cache grows without bound and window attention breaks down once the initial tokens are evicted. Learn about the phenomenon of attention sinks, the disproportionate attention that LLMs place on the first few tokens, and how keeping those tokens in the cache recovers the performance of window attention. Discover the StreamingLLM framework, which enables LLMs trained with a finite attention window to generalize to effectively infinite sequence lengths without fine-tuning. Examine the experimental evidence, the semantics-versus-position debate, and whether attention sinks can be learned as a dedicated token during pre-training. Compare this approach to sparse-attention models like Big Bird and gain insights into efficient language modeling over very long token streams.
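To make the cache-eviction idea concrete, here is a minimal Python sketch of the retention policy StreamingLLM describes: keep a few initial "sink" tokens plus a sliding window of the most recent tokens in the KV cache, and evict everything in between. The function name and the default sizes (4 sinks, a 1020-token window) are illustrative assumptions, not values taken from the video.

```python
def streaming_keep_indices(seq_len: int, num_sinks: int = 4, window: int = 1020) -> list[int]:
    """Positions retained in the KV cache under an attention-sink eviction policy:
    the first num_sinks tokens plus a sliding window of the most recent tokens.
    (Hypothetical helper for illustration; sizes are assumptions.)"""
    if seq_len <= num_sinks + window:
        # Cache not yet full: keep every position.
        return list(range(seq_len))
    # Keep the sink tokens and the trailing window; evict the middle.
    return list(range(num_sinks)) + list(range(seq_len - window, seq_len))

# After streaming 10,000 tokens, the cache holds tokens 0-3 (sinks)
# and 8980-9999 (recent window), so its size stays fixed at 1024.
kept = streaming_keep_indices(10_000)
assert len(kept) == 1024
assert kept[:4] == [0, 1, 2, 3]
```

Note that the sketch only covers which cache entries survive eviction; StreamingLLM additionally assigns position IDs by a token's slot within the cache rather than by its original position in the text.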
Syllabus
- Introduction
- What is the problem?
- The hypothesis: Attention Sinks
- Experimental evidence
- Streaming LLMs
- Semantics or position?
- Can attention sinks be learned?
- More experiments
- Comparison to Big Bird
Taught by
Yannic Kilcher
Related Courses
- Miracles of Human Language: An Introduction to Linguistics (Leiden University via Coursera)
- Language and Mind (Indian Institute of Technology Madras via Swayam)
- Text Analytics with Python (University of Canterbury via edX)
- Playing With Language (TED-Ed via YouTube)
- Computational Language: A New Kind of Science (World Science U)