
Attention Is All You Need - Transformer Paper Explained

Offered By: Aleksa Gordić - The AI Epiphany via YouTube

Tags

Machine Learning Courses
Deep Learning Courses
Transformer Models Courses
Embeddings Courses
Positional Encoding Courses

Course Description

Overview

Dive into a comprehensive video explanation of the groundbreaking "Attention Is All You Need" paper, which introduced the Transformer model. Learn the inner workings of the original Transformer through a detailed walkthrough using a simple machine translation example from English to German. Explore key concepts including tokenization, embeddings, positional encodings, encoder preprocessing, multi-head attention mechanisms, pointwise networks, causal masking, source attending, vocabulary space projection, loss functions, and decoding. Gain a deep understanding of this influential architecture that has revolutionized natural language processing and beyond.
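As a taste of the material covered, the positional encodings discussed in the video follow the sinusoidal scheme from the paper, where each position is mapped to alternating sine and cosine values at geometrically spaced frequencies. A minimal NumPy sketch (function name and shapes chosen here for illustration):

```python
import numpy as np

def positional_encoding(max_len: int, d_model: int) -> np.ndarray:
    """Sinusoidal positional encodings from the paper:
    PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
    PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))
    """
    positions = np.arange(max_len)[:, None]                    # (max_len, 1)
    div_terms = 10000 ** (np.arange(0, d_model, 2) / d_model)  # (d_model/2,)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(positions / div_terms)  # even dims: sine
    pe[:, 1::2] = np.cos(positions / div_terms)  # odd dims: cosine
    return pe

pe = positional_encoding(max_len=50, d_model=512)
print(pe.shape)  # (50, 512)
```

These encodings are simply added to the token embeddings before the first encoder layer, injecting order information into an otherwise permutation-invariant model.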

Syllabus

A high-level overview
Tokenization
Embeddings and positional encodings
Encoder preprocessing: splitting into subspaces
A single MHA head explained
Pointwise network
Causal masking in MHA
Source-attending MHA
Projecting into vocab space and the loss function
Decoding
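The attention-related syllabus items (a single MHA head, causal masking) center on the paper's scaled dot-product attention, softmax(QK^T / sqrt(d_k))V, with future positions masked out in the decoder. A minimal single-head sketch in NumPy (the large negative fill value and helper name are implementation choices here, not from the source):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, causal=False):
    """Compute softmax(Q K^T / sqrt(d_k)) V. With causal=True, each
    position is blocked from attending to later positions, as in the
    decoder's masked self-attention."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (len_q, len_k)
    if causal:
        future = np.triu(np.ones_like(scores, dtype=bool), k=1)
        scores = np.where(future, -1e9, scores)      # ~ -inf before softmax
    # numerically stable row-wise softmax
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))                      # 4 tokens, d_model = 8
out, w = scaled_dot_product_attention(x, x, x, causal=True)
print(np.allclose(np.triu(w, k=1), 0.0))             # no attention to the future
```

In the full model each of the h heads runs this computation on its own learned projection of the input (the "splitting into subspaces" step), and the source-attending MHA in the decoder uses the same function with Q from the decoder and K, V from the encoder output.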


Taught by

Aleksa Gordić - The AI Epiphany

Related Courses

TensorFlow on Google Cloud
Google Cloud via Coursera
Art and Science of Machine Learning 日本語版
Google Cloud via Coursera
Art and Science of Machine Learning auf Deutsch
Google Cloud via Coursera
Art and Science of Machine Learning em Português Brasileiro
Google Cloud via Coursera
Art and Science of Machine Learning en Español
Google Cloud via Coursera