Understanding Deep Learning Requires Rethinking Generalization

Offered By: University of Central Florida via YouTube

Tags

Deep Learning Courses
Data Augmentation Courses
Implicit Regularization Courses

Course Description

Overview

Explore the intricacies of deep learning and challenge conventional wisdom on generalization in this 40-minute lecture from the University of Central Florida. Delve into topics such as the Universal Approximation Theorem, L2 regularization, dropout, and data augmentation. Examine randomization tests and their results, which lead to thought-provoking conclusions and implications. Investigate explicit and implicit regularization techniques and the finite-sample expressivity of neural networks, and draw comparisons to linear models. Conclude by analyzing the role of stochastic gradient descent (SGD) in deep learning, ultimately reshaping your understanding of generalization in neural networks.
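The randomization tests discussed in the lecture are the central experiment of the underlying paper: replace the true training labels with uniformly random ones and observe that a standard over-parameterized network can still drive its training error to zero, even though the labels carry no signal. Below is a minimal sketch of that experiment; PyTorch, CIFAR-10, and the small fully connected model are illustrative assumptions, not the exact setup used in the lecture.

# Minimal sketch of the paper's randomization test (assumes PyTorch
# and torchvision; the dataset and architecture are illustrative).
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

# Load CIFAR-10 and replace every label with a random class in [0, 10).
train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=T.ToTensor())
train_set.targets = torch.randint(0, 10, (len(train_set),)).tolist()
loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

# A small multilayer perceptron; the paper uses larger architectures,
# but any sufficiently over-parameterized model exhibits the effect.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 512), nn.ReLU(),
    nn.Linear(512, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

# Train until the model memorizes the random labels: training accuracy
# climbs toward 100% despite the labels being pure noise.
for epoch in range(50):
    correct = total = 0
    for x, y in loader:
        optimizer.zero_grad()
        logits = model(x)
        loss = loss_fn(logits, y)
        loss.backward()
        optimizer.step()
        correct += (logits.argmax(dim=1) == y).sum().item()
        total += y.numel()
    print(f"epoch {epoch}: train accuracy {correct / total:.3f}")

Because the random labels are unlearnable, test accuracy stays at chance; the gap between perfect training accuracy and chance-level test accuracy is what motivates the lecture's rethinking of classical generalization bounds.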

Syllabus

Intro
Presentation Outline
Universal Approximation Theorem
L2 Regularization - "Weight Decay"
Dropout
Data Augmentation
Randomization Tests
Results of Randomization Tests
Conclusions & Implications
Explicit Regularization Tests
Implicit Regularization Findings
Finite-Sample Expressivity of Neural Networks
Appeal to Linear Models
Investigating SGD
Final Conclusions


Taught by

UCF CRCV

Related Courses

Training More Effective Learned Optimizers, and Using Them to Train Themselves - Paper Explained
Yannic Kilcher via YouTube
Implicit Regularization I
Simons Institute via YouTube
Benign Overfitting - Peter Bartlett, UC Berkeley
Alan Turing Institute via YouTube
Big Data Is Low Rank - Madeleine Udell, Cornell University
Alan Turing Institute via YouTube
Beyond Lazy Training for Over-parameterized Tensor Decomposition
Fields Institute via YouTube