Music Generation via Masked Acoustic Token Modeling
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore cutting-edge advancements in music audio synthesis in this 58-minute lecture by Bryan Pardo of Northwestern University. Delve into the combination of parallel iterative decoding and acoustic token modeling, a significant milestone in neural audio music generation. Discover how this approach enables faster inference than autoregressive methods and why it is well suited to tasks like infilling. Learn about the model's versatile applications through token-based prompting, including the ability to guide generation with selectively masked music token sequences. Examine potential outcomes ranging from high-quality audio compression to variations of original music that preserve style, genre, beat, and instrumentation while introducing novel timbres and rhythms.
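To make the decoding idea concrete, here is a minimal, hypothetical sketch of MaskGIT-style parallel iterative decoding over a token sequence: each step predicts every masked position at once, commits the most confident predictions, and re-masks the rest, so generation takes a fixed number of steps rather than one step per token. The `toy_predict` function, the `MASK` sentinel, and the unmasking schedule are illustrative assumptions, not the lecture's actual model.

```python
import numpy as np

MASK = -1  # hypothetical sentinel marking a masked acoustic token


def parallel_iterative_decode(predict, tokens, n_steps=8):
    """Fill every MASK position in `tokens` in at most `n_steps` passes.

    Each pass predicts all masked positions in parallel, keeps the most
    confident predictions, and leaves the rest masked for later passes.
    Unmasked positions (e.g. a prompt) are never modified, which is what
    enables prompting with selectively masked token sequences.
    """
    tokens = tokens.copy()
    for step in range(n_steps):
        masked = np.flatnonzero(tokens == MASK)
        if masked.size == 0:
            break
        probs = predict(tokens)        # (seq_len, vocab) per-position distribution
        preds = probs.argmax(axis=-1)  # most likely token at each position
        conf = probs.max(axis=-1)      # confidence of that prediction
        # Commit an even share of the remaining masked positions each step,
        # choosing the most confident ones first.
        n_unmask = int(np.ceil(masked.size / (n_steps - step)))
        keep = masked[np.argsort(conf[masked])[-n_unmask:]]
        tokens[keep] = preds[keep]
    return tokens


def toy_predict(tokens):
    """Stand-in for a trained masked token model: peaks at position % vocab."""
    seq_len, vocab = tokens.shape[0], 16
    probs = np.full((seq_len, vocab), 0.01)
    probs[np.arange(seq_len), np.arange(seq_len) % vocab] = 0.9
    return probs


# Token-based prompting: the first 4 tokens are fixed, the rest are masked.
prompt = np.full(12, MASK)
prompt[:4] = [3, 1, 4, 1]
decoded = parallel_iterative_decode(toy_predict, prompt)
```

With a real model, the same loop supports infilling by masking only an interior span of the sequence; the surrounding tokens then condition the predictions.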
Syllabus
Music Generation via Masked Acoustic Token Modeling
Taught by
Simons Institute
Related Courses
Create a web app that generates melodies using Magenta's AI
Coursera Community Project Network via Coursera
Generating discrete sequences: language and music
Ural Federal University via edX
Sequence Models (النماذج المتعاقبة)
DeepLearning.AI via Coursera
Genetic Algorithms for Generative Music AI - Lecture 15
Valerio Velardo - The Sound of AI via YouTube
Transformers for Generative Music AI - Part 2: Decoder and Music Generation
Valerio Velardo - The Sound of AI via YouTube