Distil-Whisper Explained - Robust Knowledge Distillation for Speech Recognition
Offered By: Unify via YouTube
Course Description
Overview
Explore a comprehensive presentation on Distil-Whisper, delivered by Sanchit Gandhi of Hugging Face. Delve into this compact yet powerful speech recognition model, which is trained through robust knowledge distillation using large-scale pseudo-labelling. Learn how Distil-Whisper runs 5.8 times faster with 51% fewer parameters while maintaining accuracy comparable to the larger Whisper model. Gain insights into the project code, the research paper, and the team behind this development. Discover additional resources for staying updated on AI research and industry trends, including The Deep Dive newsletter and Unify's blog, and connect with the Unify community across various platforms to engage further with cutting-edge AI technologies and discussions.
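The distillation approach described above trains a small student model on transcripts ("pseudo-labels") generated by the larger Whisper teacher, combining a cross-entropy term on the pseudo-labelled tokens with a KL-divergence term between the teacher's and student's output distributions. The sketch below illustrates that generic loss in NumPy; the function names, the `alpha` weighting, and the toy shapes are illustrative assumptions, not the exact formulation or hyperparameters used in the Distil-Whisper paper.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the vocabulary axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, pseudo_labels, alpha=0.8):
    """Illustrative distillation objective: a weighted sum of
    (1) cross-entropy on the teacher's pseudo-labelled tokens and
    (2) KL(teacher || student) over the output distributions.
    Shapes: logits are (time_steps, vocab_size); pseudo_labels is (time_steps,).
    The 0.8/0.2 weighting here is an assumption for the sketch."""
    s_probs = softmax(student_logits)
    t_probs = softmax(teacher_logits)
    n = len(pseudo_labels)
    # Cross-entropy against the pseudo-labelled token at each time step.
    ce = -np.mean(np.log(s_probs[np.arange(n), pseudo_labels] + 1e-12))
    # KL divergence from student to teacher, averaged over time steps.
    kl = np.mean(np.sum(
        t_probs * (np.log(t_probs + 1e-12) - np.log(s_probs + 1e-12)), axis=-1))
    return alpha * ce + (1 - alpha) * kl
```

In a real training loop these logits would come from the teacher and student models' decoders, and the loss would be minimized with gradient descent over the student's parameters only.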
Syllabus
Distil-Whisper Explained
Taught by
Unify
Related Courses
Machine Learning Capstone: An Intelligent Application with Deep Learning
University of Washington via Coursera
Natural Language Processing (Elaborazione del linguaggio naturale)
University of Naples Federico II via Federica
Deep Learning for Natural Language Processing
University of Oxford via Independent
Deep Learning Summer School
Independent
Sequence Models
DeepLearning.AI via Coursera