YoVDO

Improving Generalization by Self-Training & Self-Distillation

Offered By: MITCBMM via YouTube

Tags

Deep Learning Courses
Supervised Learning Courses
Neural Networks Courses
Hilbert Spaces Courses

Course Description

Overview

Explore the concepts of self-training and self-distillation in machine learning through this 44-minute lecture by Hossein Mobahi of Google Research. Delve into the surprising phenomenon where retraining models on their own predictions can improve generalization performance. Examine the regularization effects induced by this process and how they are amplified over multiple rounds of retraining. Investigate the rigorous characterization of these effects in Hilbert space learning and their relation to infinite-width neural networks. Cover topics such as the unconstrained form, closed-form solutions, the power iteration analogy, capacity control, and generalization guarantees. Analyze deep learning experiments and discuss open problems in the field of self-training and self-distillation.
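The retraining loop described above can be sketched in a few lines. This is a minimal illustration, not the lecture's own code: it uses plain ridge regression (a simple Hilbert-space learner) with made-up random data and an assumed penalty value, and shows that each self-distillation round, fitting the model to its own previous predictions, shrinks the solution norm, i.e. progressively tightens capacity control.

```python
import numpy as np

# Illustrative sketch of self-distillation with ridge regression.
# The data, penalty lam, and number of rounds are assumptions for
# demonstration, not values from the lecture.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))                     # 50 samples, 5 features
y = X @ rng.standard_normal(5) + 0.5 * rng.standard_normal(50)

lam = 1.0                                            # ridge penalty
targets = y
norms = []
for _ in range(4):
    # closed-form ridge solution on the current targets
    w = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ targets)
    norms.append(np.linalg.norm(w))
    targets = X @ w                                  # next round: fit own predictions

# In the eigenbasis of X^T X, each round multiplies component i by
# d_i / (d_i + lam), so the solution norm shrinks monotonically --
# the amplified regularization the lecture's power-iteration analogy describes.
print(norms)
```

Because every spectral component is damped by a factor strictly below one, repeated rounds act like power iteration on the damping operator, eventually collapsing the solution toward the dominant components, which is why too many rounds can over-regularize.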

Syllabus

Intro
Main Reference
Self-Training
Self-Distillation [Deep Learning]
Self-Distillation More Profound
Learning Functions in Hilbert Space
Unconstrained Form
Intuition
Closed Form Solution
Connections
Challenges
Power Iteration Analogy
Capacity Control
Generalization Guarantees
Revisiting Illustrative Example
Advantage of Near Interpolation
Early Stopping
Deep Learning Experiments
Open Problems


Taught by

MITCBMM

Related Courses

Neural Networks for Machine Learning
University of Toronto via Coursera
機器學習技法 (Machine Learning Techniques)
National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning
University of Washington via Coursera
Прикладные задачи анализа данных (Applied Problems in Data Analysis)
Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning
Microsoft via edX