
Implicit and Explicit Regularization in Deep Neural Networks

Offered By: Institute for Pure & Applied Mathematics (IPAM) via YouTube

Tags

Deep Neural Networks, Supervised Learning, Control Theory, Stochastic Gradient Descent, Implicit Regularization

Course Description

Overview

Explore the theoretical underpinnings of deep learning in this 37-minute lecture by Babak Hassibi of the California Institute of Technology. Delve into the success of deep neural networks, focusing on the crucial role stochastic descent methods play in finding solutions that generalize well. Connect learning algorithms such as stochastic gradient descent (SGD) and stochastic mirror descent (SMD) to H-infinity control, explaining their convergence and implicit regularization behavior in over-parameterized settings. Gain insight into the "blessing of dimensionality" phenomenon and learn about a new algorithm, regularized SMD (RSMD), which offers better generalization on noisy datasets. Examine topics such as supervised learning, local optimization, prediction error, Bregman divergence, and the distribution of weights in neural networks.
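For orientation, here is a minimal sketch of the stochastic mirror descent update the lecture builds on, written in notation of our own choosing (the potential function \psi, step size \eta, and per-sample loss L_i below are illustrative, not taken from the talk). The iterates satisfy

\[ \nabla\psi(w_{t+1}) = \nabla\psi(w_t) - \eta\,\nabla L_i(w_t), \]

or, equivalently,

\[ w_{t+1} = \arg\min_{w}\ \eta\,\langle \nabla L_i(w_t),\, w\rangle + D_\psi(w, w_t), \qquad D_\psi(w, w') = \psi(w) - \psi(w') - \langle \nabla\psi(w'),\, w - w'\rangle, \]

where D_\psi is the Bregman divergence induced by \psi. Taking \psi(w) = \tfrac{1}{2}\lVert w\rVert_2^2 recovers ordinary SGD, which is the sense in which SGD is a special case of SMD. In over-parameterized models that can fit the training data exactly, results of this kind show SMD converging to the interpolating solution closest to its initialization in Bregman divergence, which is the implicit regularization behavior the lecture analyzes.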

Syllabus

Introduction
Why is deep learning so popular?
Why does deep learning not work?
Supervised learning
Stochastic gradient descent
Local optimization
Prediction error
What we converge to
Implicit Regularization
Stochastic Mirror Descent
Bregman Divergence
Stochastic Mirror Descent Algorithm
Conventional Neural Networks
SMD
Summary
Nonlinear models
Blessing of dimensionality
Distribution of weights
Explicit regularization
Blessings of dimensionality


Taught by

Institute for Pure & Applied Mathematics (IPAM)

Related Courses

Building Classification Models with scikit-learn (Pluralsight)
Practical Deep Learning for Coders - Full Course (freeCodeCamp)
Neural Networks Made Easy (Udemy)
Intro to Deep Learning (Kaggle)
Stochastic Gradient Descent (Great Learning via YouTube)