On the Foundations of Deep Learning - SGD, Overparametrization, and Generalization
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore the foundations of deep learning in this Simons Institute lecture, which examines the optimization landscape of overparametrized networks, how gradient descent finds global minima, margin theory and generalization error, implicit regularization and its comparison to the neural tangent kernel (NTK), and the role of architectural choices such as depth in linear and convolutional networks.
Syllabus
Intro
Fundamental Questions
Challenges
What if the Landscape is Bad?
Gradient Descent Finds Global Minima
Idea: Study Dynamics of the Prediction
Local Geometry
Local vs Global Geometry
What about Generalization Error?
Does Overparametrization Hurt Generalization?
Background on Margin Theory
Max Margin via Logistic Loss
Intuition
Overparametrization Improves the Margin
Optimization with Regularizer
Comparison to NTK
Is Regularization Needed?
Warmup: Logistic Regression
What's Special About Gradient Descent?
Changing the Geometry: Steepest Descent
Steepest Descent: Examples
Beyond Linear Models: Deep Networks
Implicit Regularization: NTK vs Asymptotic
Does Architecture Matter?
Example: Changing the Depth in Linear Network
Example: Depth in Linear Convolutional Network
Random Thoughts
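The syllabus entries above are section titles only; as a rough companion to "Warmup: Logistic Regression" and "Max Margin via Logistic Loss", here is a minimal sketch of the implicit-bias phenomenon those titles refer to. Everything in it (the data, step size, and iteration counts) is an assumption for illustration, not material taken from the lecture.

```python
# Minimal sketch, not from the lecture: the data, step size, and iteration
# counts are illustrative assumptions. Plain gradient descent on the logistic
# loss over linearly separable data drives the weight norm to infinity while
# the normalized iterate drifts toward the max-margin (hard-SVM) direction.
import numpy as np
from scipy.special import expit  # numerically stable sigmoid

rng = np.random.default_rng(0)

# Two linearly separable Gaussian blobs, labelled +1 and -1.
n = 100
X = np.vstack([rng.normal([+2.0, +2.0], 0.5, size=(n, 2)),
               rng.normal([-2.0, -2.0], 0.5, size=(n, 2))])
y = np.concatenate([np.ones(n), -np.ones(n)])

def logistic_grad(w):
    # Gradient of the average logistic loss (1/N) * sum_i log(1 + exp(-y_i <x_i, w>)).
    margins = y * (X @ w)
    return (X * (-y * expit(-margins))[:, None]).mean(axis=0)

w = np.zeros(2)
lr = 0.2
for t in range(1, 100001):
    w -= lr * logistic_grad(w)
    if t in (100, 5000, 100000):
        u = w / np.linalg.norm(w)
        print(f"step {t:6d}  ||w|| = {np.linalg.norm(w):7.2f}  "
              f"margin of w/||w|| = {np.min(y * (X @ u)):.4f}")
```

The printed margin of the normalized iterate increases with the iteration count, consistent with the logistic-loss iterates converging in direction to the max-margin separator.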
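In the same spirit, for "What's Special About Gradient Descent?" and the implicit-regularization items, the sketch below (again with assumed random data and hyperparameters, not from the lecture) shows that gradient descent started from zero on an underdetermined least-squares problem converges to the minimum l2-norm interpolant, one concrete form of implicit regularization.

```python
# Minimal sketch, not from the lecture (random data and hyperparameters are
# assumptions): with more parameters than examples, gradient descent on the
# squared loss started from zero converges to the minimum l2-norm interpolant,
# i.e. the pseudoinverse solution.
import numpy as np

rng = np.random.default_rng(1)
n, d = 20, 100                       # 20 examples, 100 parameters
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

w = np.zeros(d)
lr = 0.01
for _ in range(50000):
    w -= lr * X.T @ (X @ w - y) / n  # gradient of (1/2n) * ||Xw - y||^2

w_min_norm = np.linalg.pinv(X) @ y   # minimum-norm solution among all interpolants
print("training residual        :", np.linalg.norm(X @ w - y))
print("gap to min-norm solution :", np.linalg.norm(w - w_min_norm))
```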
Taught by
Simons Institute
Related Courses
Launching into Machine Learning 日本語版
Google Cloud via Coursera
Launching into Machine Learning auf Deutsch
Google Cloud via Coursera
Launching into Machine Learning en Français
Google Cloud via Coursera
Launching into Machine Learning en Español
Google Cloud via Coursera
Основы машинного обучения (Fundamentals of Machine Learning)
Higher School of Economics via Coursera