Fast Multipole Attention: A Divide-and-Conquer Attention Mechanism for Long Sequences

Offered By: Institute for Pure & Applied Mathematics (IPAM) via YouTube

Tags

Attention Mechanisms Courses Machine Learning Courses Transformer Models Courses

Course Description

Overview

Explore a 42-minute conference talk on Fast Multipole Attention (FMA), a novel attention mechanism for Transformer-based models presented by Giang Tran from the University of Waterloo. Discover how FMA uses a divide-and-conquer strategy to reduce the time and memory complexity of attention for long sequences from O(n^2) to O(n log n) or O(n), while maintaining a global receptive field. Learn about the hierarchical approach that groups queries, keys, and values into multiple levels of resolution, allowing distant tokens to interact efficiently through coarse summaries. Understand how this multi-level strategy, inspired by fast summation methods from n-body physics and the Fast Multipole Method, can potentially enable large language models to handle much greater sequence lengths. Examine the empirical findings comparing FMA with other efficient attention variants on medium-sized datasets for autoregressive and bidirectional language modeling tasks. Gain insights into how FMA outperforms other efficient transformers in memory usage and accuracy, and its potential to improve the processing of long sequences in natural language processing applications.
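To make the divide-and-conquer idea concrete, here is a minimal two-level sketch in NumPy: each query attends at full resolution to keys in its own block, and to block-averaged (coarse) key/value summaries for all other blocks. This is an illustrative simplification under assumed details (mean-pooling for downsampling, a single coarse level, the function name `fma_two_level`), not the implementation from the talk, which uses a full multi-level hierarchy.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def fma_two_level(Q, K, V, block=4):
    """Two-level fast-multipole-style attention sketch (illustrative).

    Queries attend to fine-resolution keys/values within their own block
    and to coarse (block-averaged) summaries of every other block, so the
    cost per query is O(block + n/block) instead of O(n).
    Assumes n is divisible by `block`.
    """
    n, d = Q.shape
    nb = n // block
    Kb = K.reshape(nb, block, d)
    Vb = V.reshape(nb, block, d)
    K_coarse = Kb.mean(axis=1)   # (nb, d) coarse key summaries
    V_coarse = Vb.mean(axis=1)   # (nb, d) coarse value summaries
    out = np.zeros_like(Q)
    for b in range(nb):
        q = Q[b * block:(b + 1) * block]       # queries in block b
        other = np.arange(nb) != b             # all blocks except b
        keys = np.concatenate([Kb[b], K_coarse[other]], axis=0)
        vals = np.concatenate([Vb[b], V_coarse[other]], axis=0)
        scores = q @ keys.T / np.sqrt(d)
        out[b * block:(b + 1) * block] = softmax(scores) @ vals
    return out
```

With `block=n` (a single block, no coarse summaries) this reduces exactly to standard dense attention; shrinking `block` trades fidelity for cost, and repeating the coarsening recursively across levels is what yields the O(n log n) complexity described above.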

Syllabus

Giang Tran - Fast Multipole Attention: A Divide-and-Conquer Attention Mechanism for Long Sequences


Taught by

Institute for Pure & Applied Mathematics (IPAM)

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Natural Language Processing
Columbia University via Coursera
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent