
Random Initialization and Implicit Regularization in Nonconvex Statistical Estimation - Lecture 2

Offered By: Georgia Tech Research via YouTube

Tags

Nonconvex Optimization Courses
Gradient Descent Courses
Implicit Regularization Courses
Matrix Completion Courses

Course Description

Overview

Explore the second lecture in a five-part series featuring Princeton University's Yuxin Chen, focusing on random initialization and implicit regularization in nonconvex statistical estimation. Delve into the phenomenon where gradient descent converges to optimal solutions in nonconvex problems like phase retrieval and matrix completion, achieving near-optimal statistical and computational guarantees without careful initialization or explicit regularization. Examine the leave-one-out approach used to decouple statistical dependency between gradient descent iterates and data. Learn about the application of this method to noisy matrix completion, demonstrating near-optimal entrywise error control. Investigate topics such as low-rank matrix recovery, quadratic systems of equations, two-stage approaches, population-level state evolution, and automatic saddle avoidance in this 48-minute talk from the TRIAD Distinguished Lecture Series at Georgia Tech Research.
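The phenomenon the lecture centers on, gradient descent converging from a purely random start on a nonconvex least-squares objective, can be sketched numerically. The snippet below is a minimal illustration (not the speaker's code) for phase retrieval: it minimizes the natural quartic loss over measurements y_i = (a_i^T x)^2 with a constant step size and no spectral initialization or explicit regularization. All dimensions, seeds, and hyperparameters are illustrative choices, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 400                        # signal dimension; number of measurements (m >> n)
x = rng.standard_normal(n)
x /= np.linalg.norm(x)                # ground-truth signal, normalized to unit norm
A = rng.standard_normal((m, n))       # Gaussian sensing vectors a_i as rows
y = (A @ x) ** 2                      # phaseless measurements y_i = (a_i^T x)^2

# Gradient descent on f(z) = (1/4m) * sum_i ((a_i^T z)^2 - y_i)^2,
# starting from a *random* initialization -- no careful init, no regularization.
z = rng.standard_normal(n)
z /= np.linalg.norm(z)
eta = 0.1                             # constant step size (hand-tuned for this demo)
for _ in range(800):
    Az = A @ z
    z -= eta * (A.T @ ((Az ** 2 - y) * Az)) / m

# x is identifiable only up to a global sign flip
err = min(np.linalg.norm(z - x), np.linalg.norm(z + x))
print(err)
```

In this noiseless toy run the iterates first gain correlation with the signal (the "exponential growth of signal strength" stage in the syllabus) and then converge linearly to ±x.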

Syllabus

Intro
Statistical models come to rescue
Example: low-rank matrix recovery
Solving quadratic systems of equations
A natural least squares formulation
Rationale of two-stage approach
What does prior theory say?
Exponential growth of signal strength in Stage 1
Our theory: noiseless case
Population-level state evolution
Back to finite-sample analysis
Gradient descent theory revisited
A second look at gradient descent theory
Key proof idea: leave-one-out analysis
Key proof ingredient: random-sign sequences
Automatic saddle avoidance
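The syllabus items on the least-squares formulation carry over to the matrix-completion setting discussed in the lecture. The sketch below (again illustrative, not the speaker's code) runs vanilla factored gradient descent on the observed entries of a low-rank matrix, with a simple spectral start and no explicit regularization such as projections or balancing penalties; all problem sizes and step sizes are arbitrary demo choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 40, 2
p = 0.5                                    # each entry observed independently w.p. p
U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))
M = U @ V.T                                # ground-truth rank-r matrix
mask = rng.random((n, n)) < p              # observed index set Omega

# Spectral initialization: top-r factors of the zero-filled, rescaled data matrix
Y = np.where(mask, M, 0.0) / p
u, s, vt = np.linalg.svd(Y)
X = u[:, :r] * np.sqrt(s[:r])
Z = vt[:r].T * np.sqrt(s[:r])

# Unregularized gradient descent on f(X, Z) = (1/2p) ||P_Omega(X Z^T - M)||_F^2
eta = 0.001                                # constant step size (hand-tuned for this demo)
for _ in range(3000):
    R = np.where(mask, X @ Z.T - M, 0.0) / p
    X, Z = X - eta * (R @ Z), Z - eta * (R.T @ X)

# Entrywise (max-norm) relative error, the error metric highlighted in the talk
rel_err = np.max(np.abs(X @ Z.T - M)) / np.max(np.abs(M))
print(rel_err)
```

The entrywise (rather than Frobenius) error measured at the end mirrors the lecture's emphasis on near-optimal entrywise error control for matrix completion.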


Taught by

Georgia Tech Research

Related Courses

Training More Effective Learned Optimizers, and Using Them to Train Themselves - Paper Explained
Yannic Kilcher via YouTube
Understanding Deep Learning Requires Rethinking Generalization
University of Central Florida via YouTube
Implicit Regularization I
Simons Institute via YouTube
Benign Overfitting - Peter Bartlett, UC Berkeley
Alan Turing Institute via YouTube
Big Data Is Low Rank - Madeleine Udell, Cornell University
Alan Turing Institute via YouTube