Distil-Whisper Explained - Robust Knowledge Distillation for Speech Recognition

Offered By: Unify via YouTube

Tags

Speech Recognition Courses, Artificial Intelligence Courses, Machine Learning Courses, Deep Learning Courses, Model Compression Courses, Hugging Face Courses

Course Description

Overview

Explore a comprehensive presentation on Distil-Whisper, delivered by Sanchit Gandhi from Hugging Face. Delve into the workings of this compact yet powerful speech recognition model, which is trained through robust knowledge distillation with large-scale pseudo labelling. Learn how Distil-Whisper runs 5.8 times faster and uses 51% fewer parameters while maintaining accuracy comparable to the larger Whisper model. Gain insights into the project code, the research paper, and the team behind this development. Discover additional resources for staying up to date on AI research and industry trends, including The Deep Dive newsletter and Unify's blog, and connect with the Unify community across its platforms to engage further with cutting-edge AI technologies and discussions.
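
The presentation covers the model itself; as a rough illustration of how such a distilled checkpoint might be run for transcription, here is a minimal sketch using the Hugging Face Transformers pipeline. The checkpoint name "distil-whisper/distil-large-v2" and the audio file name are assumptions for illustration, not details taken from the video.

```python
# Minimal sketch: transcribing audio with a Distil-Whisper checkpoint
# via the Hugging Face Transformers ASR pipeline.
import torch
from transformers import pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"

asr = pipeline(
    "automatic-speech-recognition",
    model="distil-whisper/distil-large-v2",  # assumed distilled checkpoint name
    torch_dtype=torch.float16 if device != "cpu" else torch.float32,
    device=device,
)

# Transcribe a local audio file (hypothetical path); long-form audio
# is handled by splitting it into 30-second chunks.
result = asr("audio.mp3", chunk_length_s=30, batch_size=8)
print(result["text"])
```

Because the distilled student has roughly half the parameters of the full Whisper model, the same pipeline call completes noticeably faster on identical hardware, which is the practical payoff of the distillation approach discussed in the talk.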

Syllabus

Distil-Whisper Explained


Taught by

Unify

Related Courses

Hugging Face on Azure - Partnership and Solutions Announcement
Microsoft via YouTube
Question Answering in Azure AI - Custom and Prebuilt Solutions - Episode 49
Microsoft via YouTube
Open Source Platforms for MLOps
Duke University via Coursera
Masked Language Modelling - Retraining BERT with Hugging Face Trainer - Coding Tutorial
rupert ai via YouTube
Masked Language Modelling with Hugging Face - Microsoft Sentence Completion - Coding Tutorial
rupert ai via YouTube