Self / Cross, Hard / Soft Attention and the Transformer
Offered By: Alfredo Canziani via YouTube
Course Description
Overview
Explore the intricacies of attention mechanisms and Transformer architecture in this comprehensive lecture. Delve into self-attention, cross-attention, hard attention, and soft attention concepts. Learn about set encoding use cases and the key-value store paradigm. Understand the implementation of queries, keys, and values in both self-attention and cross-attention contexts. Examine the Transformer's encoder-predictor-decoder architecture, with a focus on the encoder and the unique "decoder" module. Gain practical insights through a PyTorch implementation of a Transformer encoder using Jupyter Notebook. Additionally, discover useful tips for reading and summarizing research papers collaboratively.
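The core mechanism the lecture builds up — scaled dot-product soft attention over queries, keys, and values — can be sketched in a few lines of PyTorch. This is a minimal illustration under my own naming and shapes, not the lecture's notebook code:

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Soft self-attention over a set of t input vectors.

    x: (t, d) input set; w_q, w_k, w_v: (d, d) projection matrices.
    """
    q = x @ w_q  # queries (t, d)
    k = x @ w_k  # keys    (t, d)
    v = x @ w_v  # values  (t, d)
    d = q.size(-1)
    scores = q @ k.T / d ** 0.5          # (t, t): each query scored against every key
    weights = F.softmax(scores, dim=-1)  # soft attention: each row is a distribution
    return weights @ v                   # (t, d): convex combinations of values

t, d = 5, 8
x = torch.randn(t, d)
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # torch.Size([5, 8])
```

Because the softmax mixes all values with continuous weights, this is *soft* attention; *hard* attention would instead pick a single value per query (e.g. an argmax over the scores).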
Syllabus
– Welcome to class
– Listening to YouTube from the terminal
– Summarising papers with @Notion
– Reading papers collaboratively
– Attention! Self / cross, hard / soft
– Use cases: set encoding!
– Self-attention
– Key-value store
– Queries, keys, and values → self-attention
– Queries, keys, and values → cross-attention
– Implementation details
– The Transformer: an encoder-predictor-decoder architecture
– The Transformer encoder
– The Transformer “decoder” which is an encoder-predictor-decoder module
– Jupyter Notebook and PyTorch implementation of a Transformer encoder
– Goodbye :)
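The syllabus's step from self-attention to cross-attention is only a change in where the queries come from: in self-attention, queries, keys, and values all derive from the same set; in cross-attention, the queries come from a different set than the keys and values. A hedged sketch (shapes and names are mine, not the lecture's):

```python
import torch
import torch.nn.functional as F

def attention(q_input, kv_input, w_q, w_k, w_v):
    """Generic QKV attention: self-attention when q_input is kv_input,
    cross-attention when the queries come from a different set."""
    q = q_input @ w_q   # (t_q, d)
    k = kv_input @ w_k  # (t_kv, d)
    v = kv_input @ w_v  # (t_kv, d)
    a = F.softmax(q @ k.T / k.size(-1) ** 0.5, dim=-1)  # (t_q, t_kv)
    return a @ v        # (t_q, d)

d = 8
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
x = torch.randn(5, d)  # e.g. the decoder-side set
y = torch.randn(7, d)  # e.g. the encoder's output set
self_out = attention(x, x, w_q, w_k, w_v)   # 5 queries over 5 keys/values
cross_out = attention(x, y, w_q, w_k, w_v)  # 5 queries over 7 keys/values
```

In the Transformer's encoder-predictor-decoder architecture, this cross-attention is how the "decoder" module queries the encoder's output.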
Taught by
Alfredo Canziani
Related Courses
– Neural Networks for Machine Learning (University of Toronto via Coursera)
– 機器學習技法 (Machine Learning Techniques) (National Taiwan University via Coursera)
– Machine Learning Capstone: An Intelligent Application with Deep Learning (University of Washington via Coursera)
– Прикладные задачи анализа данных (Applied Problems of Data Analysis) (Moscow Institute of Physics and Technology via Coursera)
– Leading Ambitious Teaching and Learning (Microsoft via edX)