Transformers for Generative Music AI - Part 2: Decoder and Music Generation
Offered By: Valerio Velardo - The Sound of AI via YouTube
Course Description
Overview
Dive deep into transformers and their application to generative music AI in this comprehensive video lecture. Explore the intuition, theory, and mathematics behind transformers, focusing on the decoder and its sublayers, including masked multi-head attention. Learn how to leverage transformers for music generation, with practical tips and tricks drawn from industry experience. Discover why music representation and data are key to the generation process, and gain insight into future research directions in neuro-symbolic integration for more robust music generation. Accompanying lecture slides and a community discussion are available to further deepen your understanding of this cutting-edge approach to AI-driven music creation.
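The lecture covers masked multi-head attention at the level of intuition and slides, but the core idea is compact enough to sketch in code. Below is a minimal, self-contained NumPy sketch of single-head causal self-attention; it is not taken from the course, and the function name masked_self_attention, the toy dimensions, and the random projection matrices are illustrative assumptions. The causal mask is what lets a decoder be trained in parallel on full token sequences while still generating music one token at a time at inference.

import numpy as np

def masked_self_attention(x, w_q, w_k, w_v):
    """Single-head causal self-attention over a sequence of token embeddings.

    x:             (seq_len, d_model) token embeddings
    w_q, w_k, w_v: (d_model, d_k) projection matrices (illustrative, random here)
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)              # (seq_len, seq_len) attention scores

    # Causal mask: position i may only attend to positions <= i (no peeking at future tokens).
    seq_len = scores.shape[0]
    mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
    scores = np.where(mask, -1e9, scores)

    # Softmax over the key dimension, then weight the value vectors.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy usage: 4 "music tokens" (e.g. note events) with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = masked_self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (4, 8)

A real decoder block, as described in the syllabus below, would add multiple heads, residual add & norm steps, and a feedforward sublayer around this attention operation, plus a final linear and softmax layer to predict the next music token.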
Syllabus
Intro
Decoder intuition
Decoder input
Decoder block
Training / inference discrepancy
Masked multi-head attention
Add & norm
Multi-head attention
Feedforward
Decoder block
Linear & softmax
Decoder step-by-step
Training a transformer
Music generation with transformers
Valerio's music generation transformer routine
Music data is key
Pros and cons
Most promising research
Key takeaways
What's up next?
Taught by
Valerio Velardo - The Sound of AI
Related Courses
Building and Managing Superior Skills (State University of New York via Coursera)
ChatGPT et IA : mode d'emploi pour managers et RH (CNAM via France Université Numérique)
Digital Skills: Artificial Intelligence (Accenture via FutureLearn)
AI Foundations for Everyone (IBM via Coursera)
Design a Feminist Chatbot (Institute of Coding via FutureLearn)