MusicLM Generates Music From Text - Paper Breakdown
Offered By: Valerio Velardo - The Sound of AI via YouTube
Course Description
Overview
Explore the groundbreaking MusicLM model in this comprehensive video breakdown. Delve into the world of text-based music generation as the presenter analyzes Google's approach to creating convincing short music clips with high audio fidelity. Learn about the model's architecture, including its key components: SoundStream, w2v-BERT, and MuLan. Understand the training and inference processes, examine the experimental results, and discuss the model's limitations. Gain insight into the research procedure behind this technology, which combines pre-trained deep learning models in a way that has energized the Music AI community. Compare MusicLM with other text-to-music systems such as Riffusion and Mubert AI, and watch demonstrations of its capabilities.
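The architecture described above chains three components: MuLan maps the text prompt into conditioning tokens, a semantic stage predicts w2v-BERT-style semantic tokens, and an acoustic stage predicts SoundStream codec tokens that a decoder turns into audio. The sketch below is a toy illustration of that hierarchical token pipeline only; the stand-in functions are hypothetical placeholders, since the real stages are large neural networks covered in the video.

```python
# Toy sketch of MusicLM's hierarchical token pipeline.
# All three "stages" here are hypothetical stand-ins using integer
# arithmetic; the real MuLan, w2v-BERT, and SoundStream components
# are neural models, and the stages are autoregressive Transformers.

def mulan_text_tokens(prompt: str) -> list:
    """Stand-in for MuLan: map a text prompt to conditioning tokens."""
    return [hash(word) % 1024 for word in prompt.split()]

def semantic_stage(conditioning: list, n_tokens: int) -> list:
    """Stand-in for the semantic stage: predict w2v-BERT-style
    semantic tokens conditioned on the MuLan tokens."""
    seed = sum(conditioning)
    return [(seed + i) % 512 for i in range(n_tokens)]

def acoustic_stage(conditioning: list, semantic: list, n_tokens: int) -> list:
    """Stand-in for the acoustic stage: predict SoundStream codec
    tokens conditioned on MuLan + semantic tokens; SoundStream's
    decoder would reconstruct a waveform from these."""
    seed = sum(conditioning) + sum(semantic)
    return [(seed + i) % 1024 for i in range(n_tokens)]

def generate(prompt: str, n_semantic: int = 8, n_acoustic: int = 16) -> list:
    """Run the three stages in sequence, as MusicLM does at inference."""
    cond = mulan_text_tokens(prompt)
    sem = semantic_stage(cond, n_semantic)
    return acoustic_stage(cond, sem, n_acoustic)

codec_tokens = generate("relaxing jazz with a saxophone solo")
print(len(codec_tokens))  # 16 acoustic tokens for the decoder
```

The point of the structure, not the arithmetic: each stage conditions on the outputs of the previous one, which is why MuLan, w2v-BERT, and SoundStream are discussed separately in the syllabus before training and inference.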
Syllabus
Intro
Text-to-music
MusicLM demo
Riffusion and Mubert AI
MusicLM architecture
Components overview
SoundStream
w2v-BERT
MuLan
Training
Inference
Experiments
Limitations
Thoughts on research procedure
Taught by
Valerio Velardo - The Sound of AI
Related Courses
AudioGen - Textually Guided Audio Generation - Paper Explained (Aleksa Gordić - The AI Epiphany via YouTube)
A Composer's Guide to Creating with Generative Neural Networks (GOTO Conferences via YouTube)
21 Recent AI Updates in 23 Minutes (1littlecoder via YouTube)
Popcorn & Clocks - A Story About Scheduling in the Browser (NDC Conferences via YouTube)
Monotron - A 1980s Style Home Computer Written in Rust (ACCU Conference via YouTube)