YoVDO

Algorithms and Hardness for Attention and Kernel Density Estimation

Offered By: Google TechTalks via YouTube

Tags

Algorithms, Machine Learning, Computational Complexity, Computational Physics, Transformer Models, Polynomial Method

Course Description

Overview

Explore the computational challenges and algorithmic solutions for Attention in Large Language Models and Kernel Density Estimation in this Google TechTalk presented by Josh Alman. Dive into the standard quadratic-time algorithms for both problems and discover recent advances toward almost linear-time performance through techniques such as the Fast Multipole Method and the polynomial method. Examine fine-grained hardness results that demonstrate the optimality of these approaches in specific parameter regimes, while identifying potential areas for improvement. Learn about the speaker's collaborative research efforts and gain insights into the intersection of algorithm design, complexity theory, and algebraic tools for enhancing algorithmic efficiency.
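To make the "quadratic-time" starting point concrete, here is a minimal NumPy sketch (not taken from the talk; the function names naive_attention and naive_kde are illustrative) of the straightforward algorithms: attention materializes an n x n score matrix, and kernel density estimation compares every query against every data point, so both cost time roughly n^2. The methods discussed in the talk aim to approximate these outputs without forming the full n x n interaction.

```python
import numpy as np

def naive_attention(Q, K, V):
    """Standard attention: softmax(Q K^T / sqrt(d)) V.

    With n tokens this builds an n x n score matrix, so time and
    memory grow quadratically in n -- the bottleneck the talk addresses.
    """
    d = Q.shape[1]
    scores = Q @ K.T / np.sqrt(d)                 # shape (n, n)
    scores -= scores.max(axis=1, keepdims=True)   # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

def naive_kde(queries, points, bandwidth=1.0):
    """Gaussian kernel density estimate at each query point.

    Also quadratic: every query is compared against every data point.
    """
    d = points.shape[1]
    # pairwise squared distances, shape (num_queries, num_points)
    d2 = ((queries[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    norm = (2 * np.pi * bandwidth ** 2) ** (d / 2)
    return np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1) / norm

# Tiny usage example with random data (illustrative only).
rng = np.random.default_rng(0)
n, d = 8, 4
Q, K, V = rng.normal(size=(n, d)), rng.normal(size=(n, d)), rng.normal(size=(n, d))
print(naive_attention(Q, K, V).shape)   # (8, 4)
print(naive_kde(Q, K).shape)            # (8,)
```

Note the structural similarity: each attention output row is an exponential-kernel-weighted average over all keys, which is why fast KDE techniques and hardness results for KDE transfer to Attention in the regimes the talk discusses.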

Syllabus

Algorithms and Hardness for Attention and Kernel Density Estimation


Taught by

Google TechTalks

Related Courses

Sequence Models
DeepLearning.AI via Coursera
Modern Natural Language Processing in Python
Udemy
Stanford Seminar - Transformers in Language: The Development of GPT Models Including GPT-3
Stanford University via YouTube
Long Form Question Answering in Haystack
James Briggs via YouTube
Spotify's Podcast Search Explained
James Briggs via YouTube