YoVDO

SparQ Attention: Bandwidth-Efficient LLM Inference

Offered By: Unify via YouTube

Tags

Attention Mechanisms Courses

Course Description

Overview

Explore a comprehensive presentation on SparQ Attention, delivered by Ivan Chelombiev and Luka Ribar from Graphcore. Delve into their work on increasing inference throughput of Large Language Models (LLMs) by reducing memory bandwidth requirements in attention blocks. Learn about the technique of selectively fetching the cached history, which can be applied to existing LLMs during inference without modifying pre-training or requiring additional fine-tuning. Discover how SparQ Attention can decrease attention memory bandwidth requirements by up to eight times while maintaining accuracy, as demonstrated through evaluations of Llama 2 and Pythia models on various downstream tasks. Gain insights into the latest advancements in AI optimization and LLM efficiency, and understand the potential impact of this research on the future of language model deployment and performance.
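To make the "selective fetching" idea concrete, here is a minimal sketch in PyTorch of one decoding step: approximate the attention scores using only the r largest-magnitude query components (so only those columns of the key cache need to be read), then fetch full keys and values for just the top-k scoring positions. The function name, the exact score scaling, and the omission of the paper's mean-value compensation term are simplifications assumed here, not the authors' reference implementation.

```python
import torch

def sparq_attention_step(q, K, V, r=32, k=128):
    """Single-query attention with selective KV-cache fetching (sketch).

    q: (d,) query for the current token
    K, V: (seq_len, d) cached keys and values
    r: number of query components used for the approximate scores
    k: number of cache positions fetched in full
    """
    d = q.shape[-1]

    # Step 1: pick the r query components with the largest magnitude and
    # score the cache using only those dimensions of K.
    idx_r = torch.topk(q.abs(), r).indices
    approx_scores = (q[idx_r] @ K[:, idx_r].T) / (d ** 0.5)

    # Step 2: keep the k cache positions with the highest approximate
    # scores and fetch their full key and value rows.
    idx_k = torch.topk(approx_scores, min(k, K.shape[0])).indices
    K_sel, V_sel = K[idx_k], V[idx_k]

    # Step 3: exact attention over the selected subset only.
    scores = (q @ K_sel.T) / (d ** 0.5)
    weights = torch.softmax(scores, dim=-1)
    return weights @ V_sel

# Toy usage: a cache of 1024 positions with head dimension 64.
q = torch.randn(64)
K = torch.randn(1024, 64)
V = torch.randn(1024, 64)
out = sparq_attention_step(q, K, V)
```

The bandwidth saving comes from steps 1 and 2: instead of streaming the entire K and V caches from memory, the kernel reads only r columns of K plus k full rows of K and V.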

Syllabus

We're very excited to welcome both Ivan Chelombiev and Luka Ribar from Graphcore, who will be presenting their work on SparQ Attention. Presentation starts at


Taught by

Unify

Related Courses

Deep Learning for Natural Language Processing
University of Oxford via Independent
Sequence Models
DeepLearning.AI via Coursera
Deep Learning Part 1 (IITM)
Indian Institute of Technology Madras via Swayam
Deep Learning - Part 1
Indian Institute of Technology, Ropar via Swayam
Deep Learning - IIT Ropar
Indian Institute of Technology, Ropar via Swayam