The Power of Nonconvex Optimization in Solving Random Quadratic Systems of Equations - Lecture 1

Offered By: Georgia Tech Research via YouTube

Tags

Nonconvex Optimization Courses
Computational Complexity Courses

Course Description

Overview

Explore the power of nonconvex optimization in solving random quadratic systems of equations in this lecture from the TRIAD Distinguished Lecture Series, delivered by Princeton University's Yuxin Chen. Delve into the effectiveness of convex relaxation techniques and their computational limitations, then discover an alternative approach based on nonconvex programs and learn how these algorithms return correct solutions in linear time under suitable conditions. Examine the extension of this theory to noisy systems, where the algorithms achieve minimax-optimal statistical accuracy at a computational cost comparable to solving a least-squares problem of the same size. Follow the lecture's progression from introduction to practical applications, covering phase retrieval in imaging science, latent variable models, and learning neural networks with quadratic activations. Explore concepts such as low-rank factorization, maximum likelihood estimation, spectral initialization, and iterative refinement via truncated Wirtinger flow (TWF), concluding with an analysis of performance guarantees and stability under noisy data.
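
To make the two-stage recipe concrete, here is a minimal NumPy sketch of plain (untruncated) Wirtinger flow on a synthetic instance of y_i = (a_i^T x)^2: a spectral initialization followed by gradient refinement of the quartic least-squares loss. The problem sizes, step size, and iteration count are illustrative choices, not the speaker's, and the outlier trimming that distinguishes TWF is omitted for brevity.

import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 12 * 100                 # m = 12n, matching the syllabus experiment
x = rng.standard_normal(n)           # ground-truth signal
A = rng.standard_normal((m, n))      # Gaussian sensing vectors a_i as rows
y = (A @ x) ** 2                     # phaseless quadratic measurements

# Stage 1: spectral initialization. Take the leading eigenvector of
# (1/m) * sum_i y_i a_i a_i^T, scaled to the estimated signal energy
# (E[y_i] = ||x||^2 for Gaussian a_i).
Y = (A.T * y) @ A / m
_, eigvecs = np.linalg.eigh(Y)
z = np.sqrt(y.mean()) * eigvecs[:, -1]

# Stage 2: iterative refinement by gradient descent on the nonconvex loss
# f(z) = (1/4m) * sum_i ((a_i^T z)^2 - y_i)^2.
step = 0.1 / y.mean()                # heuristic step size, roughly mu / ||x||^2
for _ in range(500):
    Az = A @ z
    z -= step * (A.T @ ((Az ** 2 - y) * Az) / m)

# Recovery is judged up to global sign, since y determines x only up to +-x.
err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
print(f"relative error: {err:.2e}")

With m = 12n Gaussian measurements this is a regime where the theory predicts exact recovery up to sign, and each iteration costs only two matrix-vector products with A, which is why the overall cost is comparable to that of an iterative least-squares solver.
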

Syllabus

Intro
Nonconvex optimization may be super scary
Example: solving quadratic programs is hard
Example of convex surrogate: low-rank matrix completion
Example of lifting: Max-Cut
Solving quadratic systems of equations
Motivation: a missing phase problem in imaging science
Motivation: latent variable models
Motivation: learning neural nets with quadratic activation
An equivalent view: low-rank factorization
Prior art (before our work)
A first impulse: maximum likelihood estimate
Interpretation of spectral initialization
Empirical performance of initialization (m = 12n)
Improving initialization
Iterative refinement stage: search directions
Performance guarantees of TWF (noiseless data)
Computational complexity
Numerical surprise
Stability under noisy data


Taught by

Georgia Tech Research

Related Courses

On Gradient-Based Optimization - Accelerated, Distributed, Asynchronous and Stochastic
Simons Institute via YouTube
Optimisation - An Introduction: Professor Coralia Cartis, University of Oxford
Alan Turing Institute via YouTube
Optimization in Signal Processing and Machine Learning
IEEE Signal Processing Society via YouTube
Methods for L_p-L_q Minimization in Image Restoration and Regression - SIAM-IS Seminar
Society for Industrial and Applied Mathematics via YouTube
Certificates of Nonnegativity and Their Applications in Theoretical Computer Science
Society for Industrial and Applied Mathematics via YouTube