Tricks Learned from Scaling WhisperSpeech Models to 80k+ Hours of Speech
Offered By: Linux Foundation via YouTube
Course Description
Overview
Explore the challenges and solutions encountered when scaling WhisperSpeech models to over 80,000 hours of speech in this informative conference talk. Discover the importance of small-scale experiments, maximizing GPU utilization, and transitioning from single to multi-GPU training. Learn about the significant performance improvements achieved through WebDataset implementation and strategies for effortlessly scaling AI models. Gain insights into GPU procurement options and understand the differences between consumer and professional-grade GPUs. Delve into the process of creating high-quality, open-source text-to-speech models based on cutting-edge research from major AI labs, and uncover the lessons learned in developing state-of-the-art speech synthesis capabilities.
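The talk credits a large part of its data-loading speedup to WebDataset, which packs training samples into tar-file shards that are read sequentially, an access pattern that is fast on both local disks and network storage. Below is a minimal stdlib sketch of that sharding idea; the helper names `write_shard` and `read_shard` are illustrative, and the real `webdataset` library adds sample decoding, shuffling, and PyTorch DataLoader integration on top of this.

```python
import io
import json
import tarfile

def write_shard(path, samples):
    """Write samples into a WebDataset-style tar shard.

    Each sample is a (key, payload) pair; files sharing a basename ("key")
    inside the tar belong to the same sample.
    """
    with tarfile.open(path, "w") as tar:
        for key, payload in samples:
            data = json.dumps(payload).encode("utf-8")
            info = tarfile.TarInfo(name=f"{key}.json")
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))

def read_shard(path):
    """Stream samples back in order -- one sequential pass over the shard,
    with no per-sample seeks or per-file filesystem metadata lookups."""
    with tarfile.open(path, "r") as tar:
        for member in tar:
            key = member.name.rsplit(".", 1)[0]
            payload = json.loads(tar.extractfile(member).read())
            yield key, payload
```

In a real training setup, many such shards (e.g. `shard-{000000..000999}.tar`) are spread across workers, so each GPU streams its own subset of shards instead of hammering the filesystem with millions of small-file reads.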
Syllabus
Tricks Learned from Scaling WhisperSpeech Models to 80k+ Hours of Speech - Jakub Cłapa, Collabora
Taught by
Linux Foundation
Related Courses
Building AI Applications with Watson APIs (IBM via Coursera)
Microsoft Cognitive Services: Azure Custom Text to Speech (Pluralsight)
Getting Started with Xamarin.Essentials in Xamarin.Forms (Pluralsight)
Learning Microsoft Cognitive Services for Developers (LinkedIn Learning)
Microsoft Cognitive Services for Developers: 2 Speech (LinkedIn Learning)