Distill Whisper Explained - Robust Knowledge Distillation for Speech Recognition
Offered By: Unify via YouTube
Course Description
Overview
Explore a comprehensive presentation on Distil-Whisper, delivered by Sanchit Gandhi from Hugging Face. Delve into the details of this compact yet powerful speech recognition model, which is trained by robust knowledge distillation through large-scale pseudo labelling. Learn how Distil-Whisper runs 5.8 times faster with 51% fewer parameters while maintaining accuracy comparable to the larger Whisper model. Gain insights into the project code, research paper, and the team behind this development. Discover additional resources for staying updated on AI research and industry trends, including The Deep Dive newsletter and Unify's blog. Connect with the Unify community through various platforms to further engage with cutting-edge AI technologies and discussions.
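To make the distillation idea concrete, here is a minimal sketch of a typical distillation objective: the student is trained on a weighted sum of a KL-divergence term against the teacher's output distribution and a cross-entropy term on the teacher's pseudo-labels. This is an illustrative toy in NumPy, not the actual Distil-Whisper training code; the function names, `alpha`, and `temperature` values are assumptions chosen for the example.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Numerically stable softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, pseudo_labels,
                      alpha=0.8, temperature=2.0):
    """Toy distillation loss (illustrative only, not the paper's code):
    alpha * KL(teacher || student) + (1 - alpha) * CE(student, pseudo_labels).
    Logits have shape (timesteps, vocab); pseudo_labels holds the token id
    the teacher predicted at each timestep."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # KL divergence from the student to the teacher distribution, averaged over timesteps.
    kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)), axis=-1).mean()
    # Cross-entropy of the student against the teacher's pseudo-labels.
    probs = softmax(student_logits)
    ce = -np.log(probs[np.arange(len(pseudo_labels)), pseudo_labels]).mean()
    return alpha * kl + (1 - alpha) * ce
```

When the student's logits match the teacher's exactly, the KL term vanishes and only the pseudo-label cross-entropy remains, which is why the student can shrink substantially while staying close to the teacher's behaviour.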
Syllabus
Distill Whisper Explained
Taught by
Unify
Related Courses
Neural Networks for Machine Learning - University of Toronto via Coursera
機器學習技法 (Machine Learning Techniques) - National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning - University of Washington via Coursera
Прикладные задачи анализа данных (Applied Data Analysis) - Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning - Microsoft via edX