Transformers for Generative Music AI - Part 2: Decoder and Music Generation
Offered By: Valerio Velardo - The Sound of AI via YouTube
Course Description
Overview
Dive deep into the world of transformers and their application in generative music AI in this comprehensive video lecture. Explore the intuition, theory, and mathematics behind transformers, focusing on the decoder component and its various sublayers, including masked multi-head attention. Learn how to leverage transformers for music generation, with practical tips and tricks from industry experience. Discover the importance of music representation and data in the generation process, and gain insights into future research directions in neuro-symbolic integration for more robust music generation. Access accompanying lecture slides and join a community discussion to further enhance your understanding of this cutting-edge technology in AI-driven music creation.
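The lecture's centerpiece is the decoder's masked (causal) multi-head attention sublayer. As a rough companion to that part of the video, here is a minimal PyTorch sketch of causal multi-head self-attention; it is not the lecture's code, and names such as MaskedSelfAttention, d_model, and n_heads are illustrative assumptions.

```python
# Minimal sketch of a decoder-style masked (causal) multi-head self-attention
# sublayer, for illustration only; not code from the lecture.
import math
import torch
import torch.nn as nn

class MaskedSelfAttention(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)   # joint projection to queries, keys, values
        self.out = nn.Linear(d_model, d_model)       # output projection after head concatenation

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model), e.g. embedded music tokens
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # split into heads: (batch, n_heads, seq_len, d_head)
        q, k, v = (z.view(b, t, self.n_heads, self.d_head).transpose(1, 2) for z in (q, k, v))
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)
        # causal mask: position i may only attend to positions <= i,
        # which is what keeps decoding autoregressive
        causal = torch.tril(torch.ones(t, t, dtype=torch.bool, device=x.device))
        scores = scores.masked_fill(~causal, float("-inf"))
        attn = scores.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, t, d)
        return self.out(out)
```

In a full decoder block this sublayer would be wrapped with the add & norm, multi-head attention, and feedforward sublayers listed in the syllabus below.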
Syllabus
Intro
Decoder intuition
Decoder input
Decoder block
Training / inference discrepancy
Masked multi-head attention
Add & norm
Multi-head attention
Feedforward
Decoder block
Linear & softmax
Decoder step-by-step
Training a transformer
Music generation with transformers
Valerio's music generation transformer routine
Music data is key
Pros and cons
Most promising research
Key takeaways
What's up next?
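Two syllabus topics above, "Training / inference discrepancy" and "Music generation with transformers", can be illustrated with a short hedged sketch: during training the decoder is teacher-forced on ground-truth token sequences, while at inference it must feed its own predictions back in. The functions, the `model` interface, and the token ids below are assumptions for illustration, not the lecture's actual code or data.

```python
# Sketch of teacher-forced training vs. autoregressive inference for a
# decoder-only music transformer; `model` maps (batch, seq_len) token ids
# to (batch, seq_len, vocab_size) logits. Illustrative only.
import torch
import torch.nn.functional as F

def train_step(model, tokens, optimizer):
    """One teacher-forced step: predict token t+1 from the ground-truth tokens up to t."""
    inputs, targets = tokens[:, :-1], tokens[:, 1:]
    logits = model(inputs)                                    # (batch, seq_len - 1, vocab_size)
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def generate(model, start_tokens, max_new_tokens=64, temperature=1.0):
    """Autoregressive sampling: each newly sampled music token is appended and fed back in."""
    seq = start_tokens                                        # (1, seed_len) tensor of token ids
    for _ in range(max_new_tokens):
        logits = model(seq)[:, -1, :] / temperature           # logits for the next token only
        next_token = torch.multinomial(logits.softmax(dim=-1), num_samples=1)
        seq = torch.cat([seq, next_token], dim=1)
    return seq
```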
Taught by
Valerio Velardo - The Sound of AI
Related Courses
Sequence Models
DeepLearning.AI via Coursera
Create a web app that generates melodies using Magenta's AI
Coursera Community Project Network via Coursera
Generating discrete sequences: language and music
Ural Federal University via edX
Sequence Modeling with Neural Networks
Alexander Amini via YouTube
Training a Long-Short Term Memory Network for Melody Generation
Valerio Velardo - The Sound of AI via YouTube