On the Foundations of Deep Learning - SGD, Overparametrization, and Generalization
Offered By: Simons Institute via YouTube
Course Description
Overview
Syllabus
Intro
Fundamental Questions
Challenges
What if the Landscape is Bad?
Gradient Descent Finds Global Minima
Idea: Study Dynamics of the Prediction
Local Geometry
Local vs Global Geometry
What about Generalization Error?
Does Overparametrization Hurt Generalization?
Background on Margin Theory
Max Margin via Logistic Loss
Intuition
Overparametrization Improves the Margin
Optimization with Regularizer
Comparison to NTK
Is Regularization Needed?
Warmup: Logistic Regression
What's Special About Gradient Descent?
Changing the Geometry: Steepest Descent
Steepest Descent: Examples
Beyond Linear Models: Deep Networks
Implicit Regularization: NTK vs Asymptotic
Does Architecture Matter?
Example: Changing the Depth in Linear Network
Example: Depth in Linear Convolutional Network
Random Thoughts
Taught by
Simons Institute
Related Courses
Neural Networks for Machine Learning — University of Toronto via Coursera
機器學習技法 (Machine Learning Techniques) — National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning — University of Washington via Coursera
Прикладные задачи анализа данных (Applied Problems of Data Analysis) — Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning — Microsoft via edX