
Improving Generalization by Self-Training & Self-Distillation

Offered By: MITCBMM via YouTube

Tags

Deep Learning Courses, Supervised Learning Courses, Neural Networks Courses, Hilbert Spaces Courses

Course Description

Overview

Explore the concepts of self-training and self-distillation in machine learning through this 44-minute lecture by Hossein Mobahi from Google Research. Delve into the surprising phenomenon where retraining a model on its own predictions can improve generalization performance. Examine the regularization effects induced by this process and how they are amplified over multiple rounds of retraining. Investigate a rigorous characterization of these effects for learning in Hilbert spaces and its relation to infinite-width neural networks. Cover topics such as the unconstrained form, closed-form solutions, a power-iteration analogy, capacity control, and generalization guarantees. Analyze deep learning experiments and discuss open problems in the field of self-training and self-distillation.
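To make the core idea concrete, here is a minimal sketch (not code from the lecture) of multi-round self-distillation using kernel ridge regression as a simple stand-in for learning in a Hilbert space: each round refits the same model class on the previous round's own predictions. The kernel, toy data, and all parameter values below are illustrative assumptions.

```python
# Illustrative sketch of multi-round self-distillation with kernel ridge
# regression (a closed-form Hilbert-space learner), not the speaker's code.
import numpy as np


def rbf_kernel(A, B, gamma=10.0):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)


def fit_krr(X, y, lam=1e-2, gamma=10.0):
    """Closed-form kernel ridge regression: alpha = (K + lam * I)^{-1} y."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Z: rbf_kernel(Z, X, gamma) @ alpha


# Hypothetical noisy 1-D regression problem.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (40, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.3 * rng.normal(size=40)

X_test = np.linspace(0.0, 1.0, 200)[:, None]
y_test = np.sin(2 * np.pi * X_test[:, 0])

targets = y
for round_idx in range(4):  # a few rounds of self-distillation
    model = fit_krr(X, targets)
    test_mse = np.mean((model(X_test) - y_test) ** 2)
    print(f"round {round_idx}: test MSE = {test_mse:.4f}")
    targets = model(X)  # next round trains on the model's own predictions
```

Under the lecture's analysis, each such round acts as an additional regularizer on the Hilbert-space solution, which is why the test error can improve for a few rounds before the solution eventually collapses.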

Syllabus

Intro
Main Reference
Self-Training
Self-Distillation [Deep Learning]
Self-Distillation More Profound
Learning Functions in Hilbert Space
Unconstrained Form
Intuition
Closed Form Solution
Connections
Challenges
Power Iteration Analogy
Capacity Control
Generalization Guarantees
Revisiting Illustrative Example
Advantage of Near Interpolation
Early Stopping
Deep Learning Experiments
Open Problems


Taught by

MITCBMM

Related Courses

An Introduction to Functional Analysis
École Centrale Paris via Coursera
Physical Foundations of Quantum Informatics
National Research Nuclear University MEPhI via edX
Functional Analysis
IMSC via Swayam
Foundations of Quantum Mechanics
University of Colorado Boulder via Coursera
Mathematical Methods for Data Analysis
The Hong Kong University of Science and Technology via edX