MusicLM Generates Music From Text - Paper Breakdown

Offered By: Valerio Velardo - The Sound of AI via YouTube

Tags

Audio Generation Courses, Deep Learning Courses

Course Description

Overview

Explore the groundbreaking MusicLM model in this comprehensive video breakdown. Delve into text-based music generation as the presenter analyzes Google's approach to creating convincing short music clips with high audio fidelity. Learn about the model's architecture, including its key components: SoundStream, w2v-BERT, and MuLan. Understand the training and inference processes, examine experimental results, and discuss limitations. Gain insights into the research procedure behind this technology, which combines several pre-trained deep learning models and has had a major impact on the Music AI community. Compare MusicLM with other text-to-music models such as Riffusion and Mubert AI, and watch demonstrations of its capabilities.
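The staged pipeline described above (MuLan text embedding → semantic tokens in the style of w2v-BERT → SoundStream acoustic tokens → waveform) can be sketched roughly as follows. This is a minimal illustrative sketch only: all function names are hypothetical placeholders with toy bodies, not the actual Google implementation or any real API.

```python
# Hedged sketch of MusicLM-style staged inference. Every function here is a
# toy stand-in for a large learned model described in the paper.

def mulan_embed(text):
    # MuLan maps a text prompt into a joint music/text embedding space.
    # Toy stand-in: a deterministic pseudo-embedding from the characters.
    return [ord(c) % 7 for c in text][:8]

def semantic_stage(conditioning):
    # First stage: an autoregressive model generates coarse semantic tokens
    # (w2v-BERT-style) conditioned on the MuLan embedding. Toy stand-in only.
    return [t * 2 for t in conditioning] * 2

def acoustic_stage(conditioning, semantic_tokens):
    # Second stage: another model generates SoundStream acoustic tokens
    # conditioned on both the embedding and the semantic tokens.
    return [s + c for s, c in zip(semantic_tokens, conditioning * 2)]

def soundstream_decode(acoustic_tokens):
    # The SoundStream decoder turns acoustic tokens back into a waveform.
    # Toy stand-in: scale token IDs into small float "samples".
    return [t / 16.0 for t in acoustic_tokens]

def generate(text):
    embedding = mulan_embed(text)
    semantic = semantic_stage(embedding)
    acoustic = acoustic_stage(embedding, semantic)
    return soundstream_decode(acoustic)

audio = generate("relaxing jazz with a saxophone solo")
print(len(audio))
```

The point of the staging is that text-audio pairs are scarce: MuLan bridges text and audio, so the token-generation stages can be trained on audio-only data.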

Syllabus

Intro
Text-to-music
MusicLM demo
Riffusion and Mubert AI
MusicLM architecture
Components overview
SoundStream
w2v-BERT
MuLan
Training
Inference
Experiments
Limitations
Thoughts on research procedure


Taught by

Valerio Velardo - The Sound of AI

Related Courses

AWS Certified Machine Learning - Specialty (LA) via A Cloud Guru
Google Cloud AI Services Deep Dive via A Cloud Guru
Introduction to Machine Learning via A Cloud Guru
Deep Learning and Python Programming for AI with Microsoft Azure via Cloudswyft on FutureLearn
Advanced Artificial Intelligence on Microsoft Azure: Deep Learning, Reinforcement Learning and Applied AI via Cloudswyft on FutureLearn