Music Generation via Masked Acoustic Token Modeling
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore cutting-edge advances in music audio synthesis in this 58-minute lecture by Bryan Pardo of Northwestern University. Delve into the combination of parallel iterative decoding and acoustic token modeling, a significant milestone in neural audio music generation. Discover how this approach enables faster inference than autoregressive methods and why it is well suited to tasks like infilling. Learn about the model's versatile applications through token-based prompting, including the ability to guide generation with selectively masked music token sequences. Examine the range of possible outcomes, from high-quality audio compression to variations of original music that preserve style, genre, beat, and instrumentation while introducing novel timbres and rhythms.
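The decoding scheme the lecture describes can be sketched in a few lines. This is a minimal toy illustration, not the lecturer's actual model: `toy_model` is a hypothetical stand-in for a trained masked-token transformer (here it just returns random guesses), and the confidence-based re-masking schedule is one common way such parallel iterative decoders are described. Unmasked tokens in the prompt are left untouched, which is what enables infilling via token-based prompting.

```python
import random

MASK = -1   # sentinel for a masked acoustic token
VOCAB = 16  # toy codebook size (real acoustic codebooks are much larger)

def toy_model(tokens):
    """Hypothetical stand-in for a trained masked-token model: for each
    masked position, return a (token, confidence) guess. Here the guesses
    are random; a real model would condition on the unmasked tokens."""
    return {i: (random.randrange(VOCAB), random.random())
            for i, t in enumerate(tokens) if t == MASK}

def iterative_decode(tokens, steps=4):
    """Parallel iterative decoding: each pass predicts all masked
    positions at once, commits the most confident ones, and re-masks
    the rest, so the sequence resolves in a few passes rather than
    one token at a time as in autoregressive decoding."""
    tokens = list(tokens)
    for step in range(steps, 0, -1):
        guesses = toy_model(tokens)
        if not guesses:
            break
        # commit roughly 1/step of the remaining masks this pass
        keep = max(1, len(guesses) // step)
        best = sorted(guesses.items(), key=lambda kv: -kv[1][1])[:keep]
        for i, (tok, _) in best:
            tokens[i] = tok
    # final sweep: commit anything still masked
    for i, (tok, _) in toy_model(tokens).items():
        tokens[i] = tok
    return tokens

# Infilling: known tokens in the prompt anchor the generation,
# masked positions are filled in over a few parallel passes.
prompt = [3, 7, MASK, MASK, MASK, 9, MASK, 2]
completed = iterative_decode(prompt)
```

Because every masked position is predicted in parallel at each pass, the number of model calls is fixed by `steps` rather than by sequence length, which is where the inference speedup over autoregressive generation comes from.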
Syllabus
Music Generation via Masked Acoustic Token Modeling
Taught by
Simons Institute
Related Courses
Sequence Models (DeepLearning.AI via Coursera)
Create a web app that generates melodies using Magenta's AI (Coursera Community Project Network via Coursera)
Generating discrete sequences: language and music (Ural Federal University via edX)
Sequence Modeling with Neural Networks (Alexander Amini via YouTube)
Training a Long-Short Term Memory Network for Melody Generation (Valerio Velardo - The Sound of AI via YouTube)