YoVDO

The Power of Nonconvex Optimization in Solving Random Quadratic Systems of Equations - Lecture 1

Offered By: Georgia Tech Research via YouTube

Tags

Nonconvex Optimization Courses
Computational Complexity Courses

Course Description

Overview

Explore the power of nonconvex optimization in solving random quadratic systems of equations in this lecture from the TRIAD Distinguished Lecture Series, delivered by Princeton University's Yuxin Chen. The lecture examines the effectiveness of convex relaxation techniques and their computational limitations, then presents an alternative approach based on nonconvex programs, showing how these algorithms return the correct solution in linear time under certain conditions. The theory extends to noisy systems, where the algorithms achieve minimax-optimal statistical accuracy at a computational cost comparable to solving a least-squares problem of the same size. The lecture progresses from introduction to practical applications, covering phase retrieval in imaging science, latent variable models, and learning neural networks with quadratic activation, along with concepts such as low-rank factorization, maximum likelihood estimation, spectral initialization, and iterative refinement. It concludes with an analysis of performance guarantees and stability under noisy data.
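To make the pipeline concrete, here is a minimal NumPy sketch of the two-stage approach the lecture describes: spectral initialization followed by gradient-based iterative refinement for recovering x from quadratic measurements y_i = (a_i^T x)^2. This is a plain Wirtinger-flow-style sketch, not the lecture's exact TWF algorithm (TWF additionally truncates outlier terms in the gradient); the problem sizes and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem setup: recover x from m quadratic measurements y_i = (a_i^T x)^2.
n, m = 20, 240          # m = 12n, matching the lecture's empirical setup
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = (A @ x_true) ** 2

# Spectral initialization: leading eigenvector of Y = (1/m) * sum_i y_i a_i a_i^T,
# scaled by sqrt(mean(y)) since E[y_i] = ||x||^2 for Gaussian a_i.
Y = (A.T * y) @ A / m
eigvals, eigvecs = np.linalg.eigh(Y)
z = eigvecs[:, -1] * np.sqrt(y.mean())

# Iterative refinement: gradient descent on the least-squares loss
# f(z) = (1/4m) * sum_i ((a_i^T z)^2 - y_i)^2.
step = 0.1 / y.mean()   # illustrative step size of order 1/||x||^2
for _ in range(500):
    Az = A @ z
    grad = (A.T @ ((Az**2 - y) * Az)) / m
    z = z - step * grad

# Recovery holds only up to the inherent global sign ambiguity.
err = min(np.linalg.norm(z - x_true), np.linalg.norm(z + x_true))
print("relative error:", err / np.linalg.norm(x_true))
```

Note that the loss is nonconvex, yet with enough random measurements the spectral initializer lands close enough to the truth that plain gradient descent converges; this is the phenomenon the lecture quantifies.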

Syllabus

Intro
Nonconvex optimization may be super scary
Example: solving quadratic programs is hard
Example of convex surrogate: low-rank matrix completion
Example of lifting: Max-Cut
Solving quadratic systems of equations
Motivation: a missing phase problem in imaging science
Motivation: latent variable models
Motivation: learning neural nets with quadratic activation
An equivalent view: low-rank factorization
Prior art (before our work)
A first impulse: maximum likelihood estimate
Interpretation of spectral initialization
Empirical performance of initialization (m = 12n)
Improving initialization
Iterative refinement stage: search directions
Performance guarantees of TWF (noiseless data)
Computational complexity
Numerical surprise
Stability under noisy data


Taught by

Georgia Tech Research

Related Courses

Automata Theory
Stanford University via edX
Introduction to Computational Thinking and Data Science
Massachusetts Institute of Technology via edX
算法设计与分析 Design and Analysis of Algorithms
Peking University via Coursera
How to Win Coding Competitions: Secrets of Champions
ITMO University via edX
Introduction to Computer Science with Python, Part 2
Universidade de São Paulo via Coursera