StreamingLLM: Deploying Language Models for Streaming Applications with Long Text Sequences

Offered By: MIT HAN Lab via YouTube

Tags

Language Models Courses, Transformer Architecture Courses

Course Description

Overview

Explore the innovative StreamingLLM technique for deploying language models in streaming applications with long text sequences and limited memory. Discover the "attention sink" phenomenon — the observation that a model's initial tokens attract disproportionately high attention scores regardless of their semantic relevance — and learn how it can be leveraged to process effectively unbounded text lengths without fine-tuning. Understand the limitations of existing window-based KV cache methods and the suboptimal eviction policies they employ. Gain insight into a novel approach that pins the attention-sink tokens in the KV cache while applying a sliding window to the remaining tokens. Access the implementation code on GitHub to further investigate this solution for efficient language model deployment.
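The eviction policy described above can be sketched in a few lines. The following is a minimal, illustrative simulation (class and parameter names are hypothetical, not from the StreamingLLM codebase): the first `n_sink` tokens are pinned as attention sinks, and the rest of the cache is a sliding window over the most recent tokens, with everything in between evicted.

```python
from collections import deque

class StreamingKVCache:
    """Minimal sketch of a StreamingLLM-style eviction policy:
    pin the first `n_sink` entries (attention sinks) and keep a
    sliding window of the `window` most recent entries.
    Plain values stand in for per-layer key/value tensors."""

    def __init__(self, n_sink=4, window=8):
        self.n_sink = n_sink
        self.sinks = []                      # pinned attention-sink entries
        self.recent = deque(maxlen=window)   # deque auto-evicts the oldest

    def append(self, entry):
        if len(self.sinks) < self.n_sink:
            self.sinks.append(entry)         # earliest tokens become sinks
        else:
            self.recent.append(entry)        # later tokens slide through

    def cached(self):
        # Contents of the KV cache at this point in the stream.
        return self.sinks + list(self.recent)

cache = StreamingKVCache(n_sink=4, window=8)
for token_id in range(20):                   # stream 20 tokens
    cache.append(token_id)
print(cache.cached())  # → [0, 1, 2, 3, 12, 13, 14, 15, 16, 17, 18, 19]
```

Note that the actual method also reassigns positional encodings based on positions within the cache rather than in the original text; that detail is omitted here for brevity.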

Syllabus

StreamingLLM Lecture


Taught by

MIT HAN Lab

Related Courses

Artificial Intelligence Foundations: Neural Networks
LinkedIn Learning
Transformers: Text Classification for NLP Using BERT
LinkedIn Learning
TensorFlow: Working with NLP
LinkedIn Learning
Learn Natural Language Processing with BERT! - NLP Techniques Leading from Attention and Transformer to BERT -
Udemy
Complete Natural Language Processing Tutorial in Python
Keith Galli via YouTube