Cost-Efficient Large Language Model Serving for Multi-turn Conversations with CachedAttention
Offered By: USENIX via YouTube
Course Description
Overview
Explore an approach to optimizing large language model (LLM) serving for multi-turn conversations in this 22-minute conference talk from USENIX ATC '24. Dive into the CachedAttention mechanism, designed to cut the cost of repeatedly recomputing key-value (KV) caches for the conversation history at every turn, a major overhead in multi-turn LLM serving. Learn how this new attention mechanism enables the reuse of KV caches across turns of the same conversation, employing a hierarchical caching system that spans cost-effective memory and storage tiers together with intelligent scheduling techniques. Discover strategies for efficient KV cache management, including layer-wise pre-loading, asynchronous saving, and scheduler-aware fetching and eviction schemes. Understand how CachedAttention keeps saved KV caches valid even when the context window overflows and the history must be truncated. Examine the experimental results, which show substantial improvements in time to first token, prompt prefilling throughput, and overall inference cost for multi-turn conversations with LLMs.
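To make the idea concrete, below is a minimal, illustrative Python sketch of KV-cache reuse across conversation turns with a two-tier cache and LRU eviction. The names used here (HierarchicalKVCache, serve_turn, session_id) are hypothetical and not taken from the paper's implementation; a real serving engine stores per-layer GPU tensors and overlaps host-device transfers (layer-wise pre-loading, asynchronous saving) with computation rather than using plain Python containers.

from collections import OrderedDict


class HierarchicalKVCache:
    """Two-tier cache: a small 'fast' tier (stand-in for GPU/host memory)
    backed by a larger 'slow' tier (stand-in for host memory or disk)."""

    def __init__(self, fast_capacity: int):
        self.fast_capacity = fast_capacity
        self.fast = OrderedDict()   # session_id -> list of per-layer KV blocks
        self.slow = {}              # overflow tier

    def save(self, session_id: str, kv_blocks: list):
        # The real system saves asynchronously; here we insert and spill the
        # least-recently-used session to the slow tier if the fast tier is full.
        self.fast[session_id] = kv_blocks
        self.fast.move_to_end(session_id)
        while len(self.fast) > self.fast_capacity:
            victim, blocks = self.fast.popitem(last=False)  # LRU eviction
            self.slow[victim] = blocks

    def load(self, session_id: str):
        # Scheduler-aware fetching would start this transfer before the request
        # is scheduled, and layer-wise pre-loading would hand layer 0's KV to
        # the model while later layers are still being copied.
        if session_id in self.fast:
            self.fast.move_to_end(session_id)
            return self.fast[session_id]
        blocks = self.slow.pop(session_id, None)
        if blocks is not None:
            self.save(session_id, blocks)  # promote back to the fast tier
        return blocks


def serve_turn(cache: HierarchicalKVCache, session_id: str, new_tokens: list):
    """One conversation turn: reuse cached KV for the history (skipping its
    prefill) and only prefill the newly appended tokens."""
    history_kv = cache.load(session_id) or []
    new_kv = [f"kv({tok})" for tok in new_tokens]   # stand-in for prefill work
    cache.save(session_id, history_kv + new_kv)
    return len(history_kv), len(new_kv)


if __name__ == "__main__":
    cache = HierarchicalKVCache(fast_capacity=2)
    print(serve_turn(cache, "chat-1", ["Hello", "!"]))         # (0, 2): full prefill
    print(serve_turn(cache, "chat-1", ["How", "are", "you"]))  # (2, 3): history reused

The design point this illustrates is that once the history's KV blocks are cached, each new turn only needs to prefill the newly appended tokens, which is where the improvements in time to first token and prompt prefilling throughput described in the talk come from.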
Syllabus
USENIX ATC '24 - Cost-Efficient Large Language Model Serving for Multi-turn Conversations with...
Taught by
USENIX
Related Courses
NeRF - Representing Scenes as Neural Radiance Fields for View Synthesis
Yannic Kilcher via YouTube
Perceiver - General Perception with Iterative Attention
Yannic Kilcher via YouTube
LambdaNetworks - Modeling Long-Range Interactions Without Attention
Yannic Kilcher via YouTube
Attention Is All You Need - Transformer Paper Explained
Aleksa Gordić - The AI Epiphany via YouTube
NeRFs - Neural Radiance Fields - Paper Explained
Aladdin Persson via YouTube