Random Initialization and Implicit Regularization in Nonconvex Statistical Estimation - Lecture 2

Offered By: Georgia Tech Research via YouTube

Tags

Nonconvex Optimization, Gradient Descent, Implicit Regularization, Matrix Completion

Course Description

Overview

Explore the second lecture in a five-part series featuring Princeton University's Yuxin Chen, focusing on random initialization and implicit regularization in nonconvex statistical estimation. Delve into the phenomenon where gradient descent converges to optimal solutions in nonconvex problems like phase retrieval and matrix completion, achieving near-optimal statistical and computational guarantees without careful initialization or explicit regularization. Examine the leave-one-out approach used to decouple statistical dependency between gradient descent iterates and data. Learn about the application of this method to noisy matrix completion, demonstrating near-optimal entrywise error control. Investigate topics such as low-rank matrix recovery, quadratic systems of equations, two-stage approaches, population-level state evolution, and automatic saddle avoidance in this 48-minute talk from the TRIAD Distinguished Lecture Series at Georgia Tech Research.
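The phenomenon the lecture describes can be illustrated with a minimal numerical sketch: plain gradient descent from a random starting point on the phase-retrieval least-squares objective f(x) = (1/4m) Σᵢ ((aᵢᵀx)² − yᵢ)², with no spectral initialization and no explicit regularization. The specific dimensions, step size, and iteration count below are illustrative assumptions, not parameters from the talk.

```python
import numpy as np

# Phase retrieval: recover x_star from phaseless measurements y_i = (a_i^T x_star)^2.
# We run vanilla gradient descent on the nonconvex least-squares loss, starting
# from a random initialization, mirroring the setting discussed in the lecture.
rng = np.random.default_rng(0)
n, m = 20, 400                        # signal dimension, number of measurements
x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)      # unit-norm ground-truth signal
A = rng.standard_normal((m, n))       # Gaussian sensing vectors a_i (rows of A)
y = (A @ x_star) ** 2                 # quadratic (phaseless) measurements

x = rng.standard_normal(n) / np.sqrt(n)   # random init -- no careful spectral init
eta, T = 0.1, 1000                        # step size and iteration budget (assumed)
for _ in range(T):
    Ax = A @ x
    grad = A.T @ ((Ax ** 2 - y) * Ax) / m  # gradient of the least-squares loss
    x -= eta * grad

# Phase retrieval only identifies x_star up to a global sign flip.
err = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
print(f"relative error: {err:.2e}")
```

With a sufficiently large measurement ratio m/n, as here, the iterates typically escape the vicinity of saddle points automatically and converge to the ground truth up to sign, which is the behavior the lecture's theory explains.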

Syllabus

Intro
Statistical models come to the rescue
Example: low-rank matrix recovery
Solving quadratic systems of equations
A natural least squares formulation
Rationale of two-stage approach
What does prior theory say?
Exponential growth of signal strength in Stage 1
Our theory: noiseless case
Population-level state evolution
Back to finite-sample analysis
Gradient descent theory revisited
A second look at gradient descent theory
Key proof idea: leave-one-out analysis
Key proof ingredient: random-sign sequences
Automatic saddle avoidance


Taught by

Georgia Tech Research

Related Courses

Practical Predictive Analytics: Models and Methods
University of Washington via Coursera
Deep Learning Fundamentals with Keras
IBM via edX
Introduction to Machine Learning
Duke University via Coursera
Intro to Deep Learning with PyTorch
Facebook via Udacity
Introduction to Machine Learning for Coders!
fast.ai via Independent