YoVDO

Self / Cross, Hard / Soft Attention and the Transformer

Offered By: Alfredo Canziani via YouTube

Tags

Transformer Architecture Courses, Deep Learning Courses, Neural Networks Courses, PyTorch Courses, Jupyter Notebooks Courses, Self-Attention Mechanisms Courses

Course Description

Overview

Explore the intricacies of attention mechanisms and Transformer architecture in this comprehensive lecture. Delve into self-attention, cross-attention, hard attention, and soft attention concepts. Learn about set encoding use cases and the key-value store paradigm. Understand the implementation of queries, keys, and values in both self-attention and cross-attention contexts. Examine the Transformer's encoder-predictor-decoder architecture, with a focus on the encoder and the unique "decoder" module. Gain practical insights through a PyTorch implementation of a Transformer encoder using Jupyter Notebook. Additionally, discover useful tips for reading and summarizing research papers collaboratively.
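As a taste of the queries-keys-values idea covered in the lecture, here is a minimal sketch of soft scaled dot-product self-attention in NumPy (the lecture's own notebook uses PyTorch; the function and weight names below, such as `attention` and `wq`, are illustrative assumptions, not the lecture's code):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Soft attention: softmax(Q K^T / sqrt(d)) V."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)   # (t_q, t_k): similarity of each query to each key
    weights = softmax(scores)       # each query gets a soft mixture over the values
    return weights @ v              # (t_q, d): weighted sum of values

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 16))        # a "set" of 5 tokens, dimension 16
wq, wk, wv = (rng.normal(size=(16, 16)) for _ in range(3))

# Self-attention: queries, keys, and values all come from the same set x.
# In cross-attention, q would instead come from a different input.
out = attention(x @ wq, x @ wk, x @ wv)
print(out.shape)   # (5, 16)
```

Replacing the softmax with a hard argmax over the scores gives hard attention: each query then selects a single value rather than a soft mixture.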

Syllabus

– Welcome to class
– Listening to YouTube from the terminal
– Summarising papers with @Notion
– Reading papers collaboratively
– Attention! Self / cross, hard / soft
– Use cases: set encoding!
– Self-attention
– Key-value store
– Queries, keys, and values → self-attention
– Queries, keys, and values → cross-attention
– Implementation details
– The Transformer: an encoder-predictor-decoder architecture
– The Transformer encoder
– The Transformer “decoder” which is an encoder-predictor-decoder module
– Jupyter Notebook and PyTorch implementation of a Transformer encoder
– Goodbye :)
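The syllabus builds from the key-value store up to the PyTorch encoder notebook. As a rough companion, here is one Transformer encoder layer sketched in NumPy under assumed shapes and made-up weight names (`wq` … `w2`); the real notebook's PyTorch implementation differs in detail (e.g. multiple heads, learned norm parameters):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def layer_norm(x, eps=1e-5):
    # normalise each token's features to zero mean, unit variance
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def encoder_layer(x, wq, wk, wv, wo, w1, w2):
    """One encoder layer: self-attention block + feed-forward block,
    each followed by a residual connection and layer norm."""
    q, k, v = x @ wq, x @ wk, x @ wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v
    x = layer_norm(x + attn @ wo)        # residual + norm after attention
    hidden = np.maximum(0.0, x @ w1)     # position-wise feed-forward (ReLU)
    return layer_norm(x + hidden @ w2)   # residual + norm after the MLP

d, t = 16, 5
rng = np.random.default_rng(1)
wq, wk, wv, wo = (rng.normal(size=(d, d)) * 0.1 for _ in range(4))
w1 = rng.normal(size=(d, 4 * d)) * 0.1
w2 = rng.normal(size=(4 * d, d)) * 0.1

x = rng.normal(size=(t, d))              # 5 tokens, dimension 16
y = encoder_layer(x, wq, wk, wv, wo, w1, w2)
print(y.shape)   # (5, 16)
```

Stacking several such layers (plus positional information) yields the full Transformer encoder discussed in the lecture.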


Taught by

Alfredo Canziani

Related Courses

Deep Learning with Python and PyTorch.
IBM via edX
Introduction to Machine Learning
Duke University via Coursera
How Google does Machine Learning em Português Brasileiro
Google Cloud via Coursera
Intro to Deep Learning with PyTorch
Facebook via Udacity
Secure and Private AI
Facebook via Udacity