Tricks Learned from Scaling WhisperSpeech Models to 80k+ Hours of Speech

Offered By: Linux Foundation via YouTube

Tags

Speech Synthesis Courses
Deep Learning Courses
Speech Recognition Courses
Text to Speech Courses

Course Description

Overview

Explore the challenges and solutions encountered when scaling WhisperSpeech models to over 80,000 hours of speech in this informative conference talk. Discover the importance of small-scale experiments, maximizing GPU utilization, and transitioning from single to multi-GPU training. Learn about the significant performance improvements achieved through WebDataset implementation and strategies for effortlessly scaling AI models. Gain insights into GPU procurement options and understand the differences between consumer and professional-grade GPUs. Delve into the process of creating high-quality, open-source text-to-speech models based on cutting-edge research from major AI labs, and uncover the lessons learned in developing state-of-the-art speech synthesis capabilities.
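The WebDataset gains mentioned above come from packing training samples into sequential tar "shards", so data loading becomes a few large streaming reads instead of many small random file reads. A minimal stdlib-only sketch of that storage layout (the file names, keys, and metadata here are hypothetical, not from the talk):

```python
import io
import json
import tarfile

def write_shard(path, samples):
    """Pack (key, audio_bytes, metadata) samples into one tar shard.

    Each sample becomes two adjacent tar members sharing a key, e.g.
    utt0.wav and utt0.json, so a reader can stream them back together.
    """
    with tarfile.open(path, "w") as tar:
        for key, audio, meta in samples:
            for suffix, payload in ((".wav", audio),
                                    (".json", json.dumps(meta).encode())):
                info = tarfile.TarInfo(name=key + suffix)
                info.size = len(payload)
                tar.addfile(info, io.BytesIO(payload))

def read_shard(path):
    """Stream samples back sequentially, grouping members by shared key."""
    samples = {}
    with tarfile.open(path, "r") as tar:
        for member in tar:
            key, _, suffix = member.name.partition(".")
            samples.setdefault(key, {})[suffix] = tar.extractfile(member).read()
    return samples

# Hypothetical two-sample shard; real shards would hold thousands of samples.
shard = "shard-000000.tar"
write_shard(shard, [("utt0", b"\x00\x01", {"text": "hello"}),
                    ("utt1", b"\x02\x03", {"text": "world"})])
data = read_shard(shard)
print(sorted(data))                              # ['utt0', 'utt1']
print(json.loads(data["utt0"]["json"])["text"])  # hello
```

The actual `webdataset` Python library adds shuffling, decoding, and multi-worker sharding on top of this layout; the sketch only shows why the format reads fast.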

Syllabus

Tricks Learned from Scaling WhisperSpeech Models to 80k+ Hours of Speech - Jakub Cłapa, Collabora


Taught by

Linux Foundation

Related Courses

Elaborazione del linguaggio naturale (Natural Language Processing)
University of Naples Federico II via Federica
Microsoft Bot Framework and Conversation as a Platform
Microsoft via edX
Natural Language Processing in Microsoft Azure
Microsoft via Coursera
Chatbot with Mic Input-Speaker Output Using Python, Jarvis, and DialoGPT
YouTube
Introduction to Amazon Polly
Pluralsight