Tricks Learned from Scaling WhisperSpeech Models to 80k+ Hours of Speech
Offered By: Linux Foundation via YouTube
Course Description
Overview
Explore the challenges and solutions encountered when scaling WhisperSpeech models to over 80,000 hours of speech in this conference talk. Discover the importance of small-scale experiments, maximizing GPU utilization, and transitioning from single-GPU to multi-GPU training. Learn about the significant performance improvements achieved by adopting WebDataset for data loading, and strategies for scaling AI models with minimal friction. Gain insights into GPU procurement options and the differences between consumer- and professional-grade GPUs. Delve into the process of creating high-quality, open-source text-to-speech models based on cutting-edge research from major AI labs, and uncover the lessons learned in developing state-of-the-art speech synthesis capabilities.
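As an illustration of the data-loading change mentioned above, here is a minimal sketch of streaming sharded audio with the WebDataset library in PyTorch. It assumes the data has already been packed into tar shards; the shard pattern and the per-sample field names ("flac", "json") are hypothetical and not taken from the talk.

    import webdataset as wds

    # Hypothetical shard pattern: samples live inside tar archives, so
    # training reads large sequential files instead of millions of tiny ones.
    shards = "speech-{000000..000099}.tar"

    dataset = (
        wds.WebDataset(shards)
        .shuffle(1000)               # shuffle within an in-memory sample buffer
        .decode(wds.torch_audio)     # decode audio to (tensor, sample_rate) via torchaudio
        .to_tuple("flac", "json")    # yield (audio, metadata) pairs
    )

    # WebLoader wraps torch.utils.data.DataLoader for iterable pipelines.
    loader = wds.WebLoader(dataset, batch_size=None, num_workers=4)

    for audio, meta in loader:
        pass  # feed samples into the training loop

Because shards are independent files, each GPU or worker process can stream a disjoint subset of them, which is one reason this layout scales smoothly from single-GPU to multi-GPU training.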
Syllabus
Tricks Learned from Scaling WhisperSpeech Models to 80k+ Hours of Speech - Jakub Cłapa, Collabora
Taught by
Linux Foundation
Related Courses
Neural Networks for Machine Learning - University of Toronto via Coursera
機器學習技法 (Machine Learning Techniques) - National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning - University of Washington via Coursera
Прикладные задачи анализа данных (Applied Problems in Data Analysis) - Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning - Microsoft via edX