Transformers Explained - Part 1: Generative Music AI
Offered By: Valerio Velardo - The Sound of AI via YouTube
Course Description
Overview
Dive into a comprehensive video lecture on transformer architectures, focusing on their application in generative music AI. Explore the intuition, theory, and mathematical formalization behind transformers, which have become dominant in deep learning across various fields. Gain insights into the encoder structure, self-attention mechanisms, multi-head attention, positional encoding, and feedforward layers. Follow along with step-by-step explanations of each component, including visual recaps and key takeaways. Enhance your understanding of this powerful deep learning architecture and its potential in audio and music processing.
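The lecture's centerpiece is scaled dot-product self-attention. As a companion to the description above, here is a minimal NumPy sketch of that mechanism (this is not part of the course materials; the matrix names `W_q`, `W_k`, `W_v` and the toy dimensions are illustrative assumptions):

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention over a sequence of token embeddings X."""
    Q = X @ W_q                                  # query matrix
    K = X @ W_k                                  # key matrix
    V = X @ W_v                                  # value matrix
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)              # similarity scores, scaled by sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                           # attention-weighted sum of values

# Toy example: a sequence of 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
X = rng.normal(size=(seq_len, d_model))
W_q, W_k, W_v = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = self_attention(X, W_q, W_k, W_v)
print(out.shape)  # (4, 8)
```

Each output row is a mixture of all value vectors, weighted by how strongly that token's query matches every token's key; multi-head attention, covered later in the lecture, simply runs several such attention functions in parallel and concatenates the results.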
Syllabus
Intro
Context
The intuition
Encoder
Encoder block
Self-attention
Matrices
Input matrix
Query, key, value matrices
Self-attention formula
Self-attention: Step 1
Self-attention: Step 2
Self-attention: Step 3
Self-attention: Step 4
Self-attention: Visual recap
Multi-head attention
The problem of sequence order
Positional encoding
How to compute positional encoding
Feedforward layer
Add & norm layer
Deeper meaning of encoder components
Encoder step-by-step
Key takeaways
What next?
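The "How to compute positional encoding" step of the syllabus can be sketched in a few lines of NumPy. This is a sketch of the standard sinusoidal encoding (sine on even dimensions, cosine on odd dimensions), not code from the lecture itself:

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding: sin on even dims, cos on odd dims."""
    pos = np.arange(seq_len)[:, None]        # positions 0..seq_len-1, shape (seq_len, 1)
    i = np.arange(d_model)[None, :]          # embedding dims, shape (1, d_model)
    # Paired dims (2k, 2k+1) share the same frequency 1/10000^(2k/d_model).
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

pe = positional_encoding(50, 16)
print(pe.shape)  # (50, 16)
```

The resulting matrix is added element-wise to the input embeddings, injecting the sequence-order information that self-attention alone cannot see.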
Taught by
Valerio Velardo - The Sound of AI
Related Courses
Introduction to Digital Sound Design (Emory University via Coursera)
Foundations of Wavelets and Multirate Digital Signal Processing (Indian Institute of Technology Bombay via Swayam)
iOS Development for Creative Entrepreneurs (University of California, Irvine via Coursera)
Deploying TinyML (Harvard University via edX)
Digital Signal Processing (École Polytechnique Fédérale de Lausanne via Coursera)