Inverse Problems under a Learned Generative Prior - Lecture 1
Offered By: International Centre for Theoretical Sciences via YouTube
Course Description
Syllabus
Inverse Problems under a Learned Generative Prior Lecture 1
Examples of inverse problems
A common prior: sparsity
Sparsity can be promoted via a convex relaxation
Recovery guarantee for sparse signals (see the first note after this syllabus)
Generative models learn to sample impressively from complex signal classes
How are generative models used in inverse problems? (see the formulation note after this syllabus)
Generative models provide SOTA performance
Deep Compressive Sensing
Initial theory for generative priors analyzed global minimizers, which may be hard to find
Random generative priors allow rigorous recovery guarantees
Compressive sensing with a random generative prior has favorable geometry for optimization
Proof Outline
Deterministic Condition for Recovery
Compressive sensing with a random generative prior has a provably convergent subgradient descent algorithm (see the NumPy sketch after this syllabus)
Guarantees for compressive sensing under generative priors have been extended to convolutional architectures
Why can generative models outperform sparsity models?
Sparsity appears to fail in Compressive Phase Retrieval
Our formulation: Deep Phase Retrieval
Generative priors can be efficiently exploited for compressive phase retrieval (see the final sketch after this syllabus)
Comparison on MNIST
New workflow for scientists
Concrete steps have already been taken
Further Theory Needed
Main takeaways
Q&A
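As background for the sparsity items above, the convex relaxation and its recovery guarantee are standard compressed sensing; they are stated here for reference, not transcribed from the lecture's slides:

```latex
% Sparse recovery via basis pursuit, the convex relaxation of \ell_0.
% Observe y = A x^\ast, where x^\ast \in \mathbb{R}^n has at most s nonzeros
% and A \in \mathbb{R}^{m \times n} with m \ll n. Solve
\min_{x \in \mathbb{R}^n} \; \|x\|_1
\quad \text{subject to} \quad A x = y .
% Standard guarantee: if A has i.i.d. Gaussian entries and
% m \gtrsim s \log(n/s), then with high probability the minimizer
% is exactly x^\ast.
```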
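The formulation note promised above: the generative-prior approach that the remaining syllabus items build on replaces the sparsity model with a low-dimensional network range, in the style popularized by Bora et al. Again standard background, stated for reference:

```latex
% Generative prior: model signals as x \approx G(z) for a trained (or random)
% network G : \mathbb{R}^k \to \mathbb{R}^n with k \ll n. Given measurements
% y = A x^\ast, estimate the latent code by empirical risk minimization:
\hat{z} \in \arg\min_{z \in \mathbb{R}^k} \; \tfrac{1}{2}\,\| A\, G(\hat z) - y \|_2^2 ,
\qquad \hat{x} = G(\hat{z}).
% This objective is nonconvex in z; the lecture's landscape and convergence
% results address when it can nonetheless be solved reliably.
```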
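The landscape and subgradient-descent items can be illustrated with a minimal NumPy sketch in the expansive-Gaussian setting the syllabus refers to: a two-layer random ReLU generator G, Gaussian measurements A, and subgradient descent on f(z) = ½‖A G(z) − y‖². This is an illustration under assumed dimensions and step size, not the lecture's code; the sign-flip safeguard mirrors the role such a step plays in the provably convergent algorithm mentioned above.

```python
# Sketch (not the lecture's code): compressive sensing under a random
# generative prior, solved by subgradient descent on
#     f(z) = 0.5 * || A G(z) - y ||^2 .
import numpy as np

rng = np.random.default_rng(0)
k, n1, n, m = 5, 50, 200, 60           # latent, hidden, signal, measurement dims

# Rows scaled so each layer is roughly norm-preserving (expansive-Gaussian model).
W1 = rng.normal(size=(n1, k)) / np.sqrt(n1)
W2 = rng.normal(size=(n, n1)) / np.sqrt(n)
A  = rng.normal(size=(m, n)) / np.sqrt(m)   # Gaussian measurement matrix

def G(z):
    """Two-layer random ReLU generator."""
    return W2 @ np.maximum(W1 @ z, 0.0)

z_star = rng.normal(size=k)
y = A @ G(z_star)                           # m compressed measurements, m << n

z, step = rng.normal(size=k), 0.02          # random init; ad hoc step size
for _ in range(4000):
    # Negation check: the landscape analysis shows the only other
    # near-stationary region sits near a negative multiple of z_star,
    # so flip the sign whenever that lowers the residual.
    if np.linalg.norm(A @ G(-z) - y) < np.linalg.norm(A @ G(z) - y):
        z = -z
    h = W1 @ z
    r = A @ (W2 @ np.maximum(h, 0.0)) - y   # residual A G(z) - y
    # Subgradient of f: ReLU'(h) acts as a 0/1 mask on the backward pass.
    z = z - step * (W1.T @ ((h > 0) * (W2.T @ (A.T @ r))))

print("relative signal error:",
      np.linalg.norm(G(z) - G(z_star)) / np.linalg.norm(G(z_star)))
```

Note that recovery here uses m = 60 measurements for an n = 200 dimensional signal, which is possible because the latent dimension k = 5 is what controls the measurement count in the guarantees discussed in the lecture.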
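The final sketch, for the deep phase retrieval formulation: only magnitudes y = |A G(z*)| are observed, and z is fit by subgradient descent on g(z) = ½‖|A G(z)| − y‖². This is a hypothetical illustration reusing the objects defined above, not the lecture's MNIST experiment; the contrast the syllabus draws is that sparsity-based compressive phase retrieval appears to need many more measurements, while generative priors admit guarantees scaling with the latent dimension up to log factors.

```python
# Hypothetical companion sketch for deep phase retrieval.
# Reuses G, A, W1, W2, z_star, rng, k from the sketch above.
y_mag = np.abs(A @ G(z_star))               # magnitude-only measurements

z, step = rng.normal(size=k), 0.02
for _ in range(6000):
    # Same negation safeguard as in the compressive sensing sketch.
    if (np.linalg.norm(np.abs(A @ G(-z)) - y_mag)
            < np.linalg.norm(np.abs(A @ G(z)) - y_mag)):
        z = -z
    h = W1 @ z
    u = A @ (W2 @ np.maximum(h, 0.0))       # A G(z); its phases are unobserved
    r = (np.abs(u) - y_mag) * np.sign(u)    # dg/du, a subgradient of g
    z = z - step * (W1.T @ ((h > 0) * (W2.T @ (A.T @ r))))

print("phase retrieval relative error:",
      np.linalg.norm(G(z) - G(z_star)) / np.linalg.norm(G(z_star)))
```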
Taught by
International Centre for Theoretical Sciences