Inverse Problems under a Learned Generative Prior - Lecture 1
Offered By: International Centre for Theoretical Sciences via YouTube
Course Description
Overview
Syllabus
Inverse Problems under a Learned Generative Prior Lecture 1
Examples of inverse problems
A common prior: sparsity
Sparsity can be optimized via a convex relaxation (see the first sketch after the syllabus)
Recovery guarantee for sparse signals
Generative models learn to impressively sample from complex signal classes
How are generative models used in inverse problems?
Generative models provide SOTA performance
Deep Compressive Sensing
Initial theory for generative priors analyzed global minimizers, which may be hard to find
Random generative priors allow rigorous recovery guarantees
Compressive sensing with random generative prior has favorable geometry for optimization
Proof Outline
Deterministic Condition for Recovery
Compressive sensing with random generative prior has a provably convergent subgradient descent algorithm (see the second sketch after the syllabus)
Guarantees for compressive sensing under generative priors have been extended to convolutional architectures
Why can generative models outperform sparsity models?
Sparsity appears to fail in Compressive Phase Retrieval
Our formulation: Deep Phase Retrieval
Generative priors can be efficiently exploited for compressive phase retrieval (see the third sketch after the syllabus)
Comparison on MNIST
New workflow for scientists
Concrete steps have already been taken
Further Theory Needed
Main takeaways
Q&A
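The syllabus item on convex relaxation refers to recovering a sparse signal by l1 minimization. As a rough illustration, and not material from the lecture itself, the following is a minimal sketch of the iterative soft-thresholding algorithm (ISTA) applied to the LASSO objective; the problem sizes, the regularization weight lam, and the Gaussian sensing matrix are illustrative assumptions.

```python
# Minimal sketch (not from the lecture): recover a sparse vector from
# compressive measurements y = A x* by solving the convex relaxation
#   min_x 0.5 * ||A x - y||_2^2 + lam * ||x||_1
# with ISTA (iterative soft-thresholding). Sizes and lam are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 80, 5                      # signal dim, measurements, sparsity (assumed)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = A @ x_true

lam = 1e-3
step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of the smooth part
x = np.zeros(n)
for _ in range(1000):
    z = x - step * (A.T @ (A @ x - y))    # gradient step on the quadratic term
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```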
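For the compressive-sensing-under-a-random-generative-prior topics, here is a minimal sketch of the basic recovery formulation, minimizing ||A G(z) - y||^2 over the latent z of an untrained expansive ReLU generator. The architecture, dimensions, and the use of Adam (rather than the subgradient method analyzed in the lecture) are assumptions made only for illustration.

```python
# Minimal sketch (assumed architecture, not the paper's): compressive sensing
# under a random generative prior. G is an untrained two-layer ReLU network
# mapping a low-dimensional latent z to a signal; we recover the signal from
# m < n random measurements y = A G(z*) by descending on  min_z ||A G(z) - y||^2.
import torch

torch.manual_seed(0)
k, n, m = 10, 200, 60                     # latent dim, signal dim, measurements

# Random (untrained) expansive ReLU generator, as in the random-prior setting.
G = torch.nn.Sequential(
    torch.nn.Linear(k, 100, bias=False), torch.nn.ReLU(),
    torch.nn.Linear(100, n, bias=False),
)
for p in G.parameters():
    p.requires_grad_(False)

A = torch.randn(m, n) / m ** 0.5          # Gaussian sensing matrix
z_true = torch.randn(k)
y = A @ G(z_true)                         # compressive measurements of the true signal

z = torch.randn(k, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)      # Adam used here for simplicity only
for _ in range(2000):
    opt.zero_grad()
    loss = torch.sum((A @ G(z) - y) ** 2)
    loss.backward()
    opt.step()

rel_err = torch.norm(G(z) - G(z_true)) / torch.norm(G(z_true))
print("relative signal error:", rel_err.item())
```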
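For the deep phase retrieval item, an analogous sketch with magnitude-only measurements; the amplitude-based least-squares loss used below is one common choice and is not necessarily the exact objective formulated in the lecture.

```python
# Minimal sketch (assumed amplitude-based formulation): compressive phase
# retrieval under a generative prior. Only the magnitudes |A x| are observed;
# we search the latent space of an untrained ReLU generator G by minimizing
#   min_z || |A G(z)| - |A G(z*)| ||_2^2  with automatic differentiation.
import torch

torch.manual_seed(0)
k, n, m = 10, 200, 80

G = torch.nn.Sequential(
    torch.nn.Linear(k, 100, bias=False), torch.nn.ReLU(),
    torch.nn.Linear(100, n, bias=False),
)
for p in G.parameters():
    p.requires_grad_(False)

A = torch.randn(m, n) / m ** 0.5
z_true = torch.randn(k)
b = torch.abs(A @ G(z_true))              # magnitude-only (phaseless) measurements

z = torch.randn(k, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(3000):
    opt.zero_grad()
    loss = torch.sum((torch.abs(A @ G(z)) - b) ** 2)
    loss.backward()
    opt.step()

# Real-valued phase retrieval determines the signal only up to a global sign.
err = min(torch.norm(G(z) - s * G(z_true)) for s in (1.0, -1.0))
print("relative error (up to sign):", (err / torch.norm(G(z_true))).item())
```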
Taught by
International Centre for Theoretical Sciences
Related Courses
Neural Networks for Machine Learning - University of Toronto via Coursera
機器學習技法 (Machine Learning Techniques) - National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning - University of Washington via Coursera
Прикладные задачи анализа данных (Applied Problems in Data Analysis) - Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning - Microsoft via edX