Transformers for Generative Music AI - Part 2: Decoder and Music Generation
Offered By: Valerio Velardo - The Sound of AI via YouTube
Course Description
Overview
Dive deep into the world of transformers and their application in generative music AI in this comprehensive video lecture. Explore the intuition, theory, and mathematics behind transformers, focusing on the decoder component and its various sublayers, including masked multi-head attention. Learn how to leverage transformers for music generation, with practical tips and tricks from industry experience. Discover the importance of music representation and data in the generation process, and gain insights into future research directions in neuro-symbolic integration for more robust music generation. Access accompanying lecture slides and join a community discussion to further enhance your understanding of this cutting-edge technology in AI-driven music creation.
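To make the "masked multi-head attention" idea mentioned above concrete, here is a minimal NumPy sketch of causal (look-ahead-masked) self-attention for a single head. The function name, projection matrices, and dimensions are illustrative assumptions for this listing, not code taken from the lecture.

```python
import numpy as np

def causal_self_attention(x, w_q, w_k, w_v):
    """Single-head self-attention with a causal (look-ahead) mask.

    x: (seq_len, d_model) token embeddings
    w_q, w_k, w_v: (d_model, d_k) projection matrices
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.shape[-1]

    # Scaled dot-product scores between every query and every key.
    scores = q @ k.T / np.sqrt(d_k)

    # Causal mask: position i may only attend to positions <= i,
    # so the decoder cannot peek at future tokens during training.
    seq_len = x.shape[0]
    mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
    scores = np.where(mask, -np.inf, scores)

    # Softmax over the key dimension, then weight the values.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy usage: 4 tokens, model dimension 8, head dimension 4.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q = rng.normal(size=(8, 4))
w_k = rng.normal(size=(8, 4))
w_v = rng.normal(size=(8, 4))
print(causal_self_attention(x, w_q, w_k, w_v).shape)  # (4, 4)
```

A full decoder block wraps several such heads in multi-head attention, followed by the add & norm and feedforward sublayers listed in the syllabus below.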
Syllabus
Intro
Decoder intuition
Decoder input
Decoder block
Training / inference discrepancy
Masked multi-head attention
Add & norm
Multi-head attention
Feedforward
Decoder block
Linear & softmax
Decoder step-by-step
Training a transformer
Music generation with transformers (see the sketch after this syllabus)
Valerio's music generation transformer routine
Music data is key
Pros and cons
Most promising research
Key takeaways
What's up next?
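As a companion to the syllabus topics "Music generation with transformers" and "Music data is key", here is a minimal sketch of autoregressive, token-by-token generation over a symbolic music vocabulary. The vocabulary, the stand-in model, and the sampling loop are illustrative assumptions and do not reproduce the routine presented in the lecture.

```python
import numpy as np

# Hypothetical symbolic-music token vocabulary (note names plus control tokens).
# This is an illustrative assumption, not the representation used in the lecture.
VOCAB = ["<start>", "C4", "D4", "E4", "F4", "G4", "A4", "B4", "rest", "<end>"]

def dummy_decoder_logits(token_ids):
    """Stand-in for a trained transformer decoder.

    A real model would attend over `token_ids` with masked multi-head attention
    and return next-token logits; random logits keep this sketch runnable.
    """
    rng = np.random.default_rng(len(token_ids))
    return rng.normal(size=len(VOCAB))

def generate(max_len=16, temperature=1.0, seed=42):
    """Autoregressive inference: sample one token at a time and feed it back in."""
    rng = np.random.default_rng(seed)
    tokens = [VOCAB.index("<start>")]
    for _ in range(max_len):
        logits = dummy_decoder_logits(tokens) / temperature
        # Softmax over the vocabulary to get a next-token distribution.
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        next_id = int(rng.choice(len(VOCAB), p=probs))
        tokens.append(next_id)
        if VOCAB[next_id] == "<end>":
            break
    return [VOCAB[t] for t in tokens]

print(generate())  # e.g. ['<start>', 'E4', 'rest', ...]
```

The gap between training with teacher forcing and this feed-your-own-output inference loop is exactly the training / inference discrepancy listed in the syllabus.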
Taught by
Valerio Velardo - The Sound of AI
Related Courses
Neural Networks for Machine Learning - University of Toronto via Coursera
Good Brain, Bad Brain: Basics - University of Birmingham via FutureLearn
Statistical Learning with R - Stanford University via edX
Machine Learning 1—Supervised Learning - Brown University via Udacity
Fundamentals of Neuroscience, Part 2: Neurons and Networks - Harvard University via edX