Transformers for Generative Music AI - Part 2: Decoder and Music Generation
Offered By: Valerio Velardo - The Sound of AI via YouTube
Course Description
Overview
Dive deep into the world of transformers and their application in generative music AI in this comprehensive video lecture. Explore the intuition, theory, and mathematics behind transformers, focusing on the decoder component and its various sublayers, including masked multi-head attention. Learn how to leverage transformers for music generation, with practical tips and tricks from industry experience. Discover the importance of music representation and data in the generation process, and gain insights into future research directions in neuro-symbolic integration for more robust music generation. Access accompanying lecture slides and join a community discussion to further enhance your understanding of this cutting-edge technology in AI-driven music creation.
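To make the decoder topics above a little more concrete, here is a minimal sketch (not taken from the lecture; NumPy and all names here are assumptions for illustration) of the causal mask used in masked multi-head attention. The mask stops each position from attending to future tokens, which is what lets the decoder train on whole sequences while still generating one token at a time.

```python
import numpy as np

def causal_masked_attention(Q, K, V):
    """Single-head scaled dot-product attention with a causal (look-ahead) mask.

    Q, K, V: arrays of shape (seq_len, d_k). Position i may only attend
    to positions <= i, mimicking the decoder's masked attention sublayer.
    """
    seq_len, d_k = Q.shape
    scores = Q @ K.T / np.sqrt(d_k)                 # (seq_len, seq_len) similarity scores
    mask = np.triu(np.ones((seq_len, seq_len)), k=1).astype(bool)
    scores = np.where(mask, -1e9, scores)           # future positions get a very large negative score
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted sum of value vectors

# Toy example: 4 music tokens embedded in 8 dimensions.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(causal_masked_attention(x, x, x).shape)  # (4, 8)
```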
Syllabus
Intro
Decoder intuition
Decoder input
Decoder block
Training / inference discrepancy
Masked multi-head attention
Add & norm
Multi-head attention
Feedforward
Decoder block
Linear & softmax
Decoder step-by-step
Training a transformer
Music generation with transformers
Valerio's music generation transformer routine
Music data is key
Pros and cons
Most promising research
Key takeaways
What's up next?
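The syllabus items "Decoder step-by-step", "Linear & softmax", and "Music generation with transformers" describe an autoregressive sampling loop. The sketch below is a hypothetical illustration of that loop (the model, function names, and vocabulary size are assumptions, not the routine shown in the lecture): at each step the decoder's final linear & softmax layer yields a distribution over the token vocabulary, one token is sampled, appended, and fed back in.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0):
    """Turn next-token logits into a softmax distribution and sample one token."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

def generate(model, seed_tokens, max_new_tokens=32, end_token=None):
    """Autoregressive loop: feed the sequence, sample a token, append, repeat."""
    tokens = list(seed_tokens)
    for _ in range(max_new_tokens):
        logits = model(tokens)  # logits over the vocabulary for the next token
        next_token = sample_next_token(logits, temperature=0.9)
        tokens.append(next_token)
        if end_token is not None and next_token == end_token:
            break
    return tokens

# Stand-in "model": random logits over a 128-symbol vocabulary (e.g. MIDI-like note tokens).
dummy_model = lambda tokens: np.random.default_rng(len(tokens)).normal(size=128)
print(generate(dummy_model, seed_tokens=[60, 64, 67]))
```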
Taught by
Valerio Velardo - The Sound of AI
Related Courses
Neural Networks for Machine Learning - University of Toronto via Coursera
機器學習技法 (Machine Learning Techniques) - National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning - University of Washington via Coursera
Прикладные задачи анализа данных (Applied Data Analysis Problems) - Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning - Microsoft via edX