Large Scale Universal Speech Generative Models
Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube
Course Description
Overview
Explore cutting-edge developments in large-scale universal speech generative models in this lecture by Wei-Ning Hsu, a research scientist at Meta Fundamental AI Research (FAIR). Delve into self-supervised learning and generative models for speech and audio, examining pioneering work such as HuBERT, AV-HuBERT, TextlessNLP, data2vec, wav2vec-U, textless speech translation, and Voicebox. Begin with an introduction to conventional neural speech generative models and their limitations in scaling to Internet-scale data. Compare the latest large-scale generative models for text and images to outline promising approaches for building scalable speech models. Discover Voicebox, the most versatile generative model for speech to date, trained on over 50K hours of multilingual speech with a flow-matching objective. Learn about its capabilities in monolingual and cross-lingual zero-shot TTS, holistic style conversion, transient noise removal, content editing, and diverse sample generation. Gain insight into Voicebox's state-of-the-art performance and run-time efficiency, and its potential impact on the field of speech generation and processing.
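To make the flow-matching objective mentioned above more concrete, the sketch below shows a minimal conditional flow-matching training step in PyTorch. It is an illustration under simplifying assumptions, not the Voicebox implementation: the VectorField network, the SIGMA_MIN constant, and the unconditional spectrogram setup are hypothetical placeholders, whereas Voicebox conditions a Transformer on text and partially masked audio.

```python
import torch
import torch.nn as nn

# Minimal conditional flow-matching loss, in the spirit of the objective the
# lecture describes for Voicebox. The velocity network below is a hypothetical
# stand-in; the real model is a Transformer over masked mel-spectrogram frames
# with phonetic conditioning.

SIGMA_MIN = 1e-5  # small noise floor on the optimal-transport path (assumed value)


class VectorField(nn.Module):
    """Toy velocity predictor v_theta(x_t, t); a placeholder architecture."""

    def __init__(self, dim: int = 80, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Broadcast the scalar time step onto every frame before concatenating.
        t_feat = t.view(-1, 1, 1).expand(-1, x_t.size(1), 1)
        return self.net(torch.cat([x_t, t_feat], dim=-1))


def flow_matching_loss(model: nn.Module, x1: torch.Tensor) -> torch.Tensor:
    """Regress the predicted velocity onto the optimal-transport target."""
    b = x1.size(0)
    x0 = torch.randn_like(x1)              # noise sample
    t = torch.rand(b, device=x1.device)    # uniform time in [0, 1]
    t_exp = t.view(-1, 1, 1)
    # Interpolate along the conditional probability path from noise to data.
    x_t = (1 - (1 - SIGMA_MIN) * t_exp) * x0 + t_exp * x1
    target = x1 - (1 - SIGMA_MIN) * x0     # target velocity field
    return ((model(x_t, t) - target) ** 2).mean()


if __name__ == "__main__":
    model = VectorField()
    fake_mels = torch.randn(4, 100, 80)    # batch of 100-frame, 80-bin "spectrograms"
    loss = flow_matching_loss(model, fake_mels)
    loss.backward()
    print(f"flow-matching loss: {loss.item():.4f}")
```

At inference time, a model trained this way generates speech by integrating the learned velocity field from noise at t=0 to t=1 with an ODE solver, which is one source of the run-time efficiency the overview mentions.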
Syllabus
Large Scale Universal Speech Generative Models - Wei-Ning Hsu
Taught by
Center for Language & Speech Processing (CLSP), JHU
Related Courses
CMU Advanced NLP: How to Use Pre-Trained Models (Graham Neubig via YouTube)
Stanford Seminar 2022 - Transformer Circuits, Induction Heads, In-Context Learning (Stanford University via YouTube)
Pretraining Task Diversity and the Emergence of Non-Bayesian In-Context Learning for Regression (Simons Institute via YouTube)
In-Context Learning: A Case Study of Simple Function Classes (Simons Institute via YouTube)
AI Mastery: Ultimate Crash Course in Prompt Engineering for Large Language Models (Data Science Dojo via YouTube)