Transformer Encoder in 100 Lines of Code
Offered By: CodeEmporium via YouTube
Course Description
Overview
Syllabus
What we will cover
Introducing Colab
Word Embeddings and d_model
What are Attention heads?
What is Dropout?
Why batch data?
How do sentences flow into the transformer?
Why feed forward layers in transformer?
Why Repeating Encoder layers?
The “Encoder” Class, nn.Module, nn.Sequential
The “EncoderLayer” Class
What is Attention: Query, Key, Value vectors
What is Attention: Matrix Transpose in PyTorch
What is Attention: Scaling
What is Attention: Masking
What is Attention: Softmax
What is Attention: Value Tensors
CRUX OF VIDEO: “MultiHeadAttention” Class
Returning the flow back to “EncoderLayer” Class
Layer Normalization
Returning the flow back to “EncoderLayer” Class
Feed Forward Layers
Why Activation Functions?
Finish the Flow of Encoder
Conclusion & Decoder for next video
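The syllabus above walks through the pieces of a transformer encoder in PyTorch: embeddings of size d_model, multi-head attention (query/key/value, scaling, masking, softmax), dropout, layer normalization, feed-forward layers, and a stack of repeated encoder layers. As a rough companion, here is a minimal sketch of those pieces. The class names echo the syllabus ("MultiHeadAttention", "EncoderLayer", "Encoder"), but the exact structure and hyperparameters are illustrative assumptions, not the video's actual code.

```python
import math
import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    """Scaled dot-product attention across several heads (illustrative sketch)."""
    def __init__(self, d_model, num_heads):
        super().__init__()
        assert d_model % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = d_model // num_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)  # query, key, value in one projection
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x, mask=None):
        batch, seq_len, d_model = x.shape
        qkv = self.qkv(x)                                        # (batch, seq, 3*d_model)
        qkv = qkv.reshape(batch, seq_len, self.num_heads, 3 * self.head_dim)
        qkv = qkv.permute(0, 2, 1, 3)                            # (batch, heads, seq, 3*head_dim)
        q, k, v = qkv.chunk(3, dim=-1)
        # scores = Q K^T / sqrt(d_k): transpose the key tensor, then scale
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.head_dim)
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float("-inf"))  # masking before softmax
        attn = torch.softmax(scores, dim=-1)
        values = attn @ v                                        # weight the value tensors
        values = values.permute(0, 2, 1, 3).reshape(batch, seq_len, d_model)
        return self.out(values)

class EncoderLayer(nn.Module):
    """Attention + feed-forward, each wrapped in dropout, residual, and layer norm."""
    def __init__(self, d_model=512, num_heads=8, ffn_hidden=2048, drop_prob=0.1):
        super().__init__()
        self.attention = MultiHeadAttention(d_model, num_heads)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(drop_prob)
        self.ffn = nn.Sequential(                 # position-wise feed-forward layers
            nn.Linear(d_model, ffn_hidden),
            nn.ReLU(),                            # activation between the two linears
            nn.Dropout(drop_prob),
            nn.Linear(ffn_hidden, d_model),
        )

    def forward(self, x, mask=None):
        x = self.norm1(x + self.dropout(self.attention(x, mask)))  # residual + norm
        x = self.norm2(x + self.dropout(self.ffn(x)))
        return x

class Encoder(nn.Module):
    """Repeats the encoder layer num_layers times, as the syllabus discusses."""
    def __init__(self, d_model=512, num_heads=8, ffn_hidden=2048,
                 drop_prob=0.1, num_layers=6):
        super().__init__()
        self.layers = nn.Sequential(
            *[EncoderLayer(d_model, num_heads, ffn_hidden, drop_prob)
              for _ in range(num_layers)]
        )

    def forward(self, x):
        return self.layers(x)

encoder = Encoder(num_layers=2)
x = torch.randn(2, 10, 512)      # (batch, sequence length, d_model)
out = encoder(x)
print(out.shape)                 # torch.Size([2, 10, 512])
```

Each layer maps a (batch, sequence, d_model) tensor to the same shape, which is what lets nn.Sequential stack identical layers; dropout and layer norm sit around both sub-blocks as in the original "Attention Is All You Need" design.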
Taught by
CodeEmporium
Related Courses
Building Batch Data Pipelines on GCP (in Spanish) - Google Cloud via Coursera
Building Batch Data Pipelines on GCP (in German) - Google Cloud via Coursera
Building Batch Data Pipelines on GCP (in French) - Google Cloud via Coursera
Building Batch Data Processing Solutions in Microsoft Azure - Pluralsight