The Mother of All Representer Theorems for Inverse Problems and Machine Learning - Michael Unser
Offered By: Alan Turing Institute via YouTube
Course Description
Overview
Syllabus
Intro
Variational formulation of inverse problem
Learning as a (linear) inverse problem
RKHS representer theorem for machine learning
Is there a mother of all representer theorems?
General notion of Banach space
Dual of a Banach space
Riesz conjugate for Hilbert spaces
Generalization: Duality mapping
Properties of duality mapping
Mother of all representer theorems (Cont'd)
Kernel methods for machine learning
Tikhonov regularization (see whiteboard)
Qualitative effect of Banach conjugation
Sparsity promoting regularization
Extreme points
Geometry of ℓ2 vs. ℓ1 minimization
Isometry with space of Radon measures
Sparse kernel expansions (Cont'd)
Special case: Translation-invariant kernels
RKHS vs. Sparse kernel expansions (LSI)
Conclusion (Cont'd)
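The syllabus items "RKHS representer theorem for machine learning", "Kernel methods for machine learning", and "Tikhonov regularization" can be illustrated with a minimal sketch: in kernel ridge regression, the RKHS representer theorem says the minimizer of a Tikhonov-regularized fit is a finite kernel expansion f(x) = Σ_n a_n k(x, x_n) over the training points, with coefficients solving a linear system. The kernel choice, bandwidth, and regularization weight below are illustrative assumptions, not taken from the lecture.

```python
# Sketch of the RKHS representer theorem via kernel ridge regression.
# Assumptions (not from the lecture): Gaussian kernel, sigma=0.1, lam=1e-3.
import numpy as np

def gaussian_kernel(x, y, sigma=0.1):
    return np.exp(-((x - y) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.standard_normal(10)

lam = 1e-3  # Tikhonov regularization weight
K = gaussian_kernel(x_train[:, None], x_train[None, :])  # Gram matrix

# Representer theorem: the optimal coefficients solve (K + lam*I) a = y,
# so the whole infinite-dimensional problem reduces to n unknowns.
a = np.linalg.solve(K + lam * np.eye(len(x_train)), y_train)

def f_hat(x):
    # The learned function is a weighted sum of kernels centered at the data
    return gaussian_kernel(np.atleast_1d(x)[:, None], x_train[None, :]) @ a

residual = np.max(np.abs(f_hat(x_train) - y_train))
print(f"max training residual: {residual:.4f}")
```

For small `lam` the expansion nearly interpolates the data; growing `lam` trades fidelity for a smoother (smaller RKHS-norm) solution, which is the qualitative effect of Tikhonov regularization discussed in the talk.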
Taught by
Alan Turing Institute
Related Courses
Introduction to Artificial Intelligence (Stanford University via Udacity)
Natural Language Processing (Columbia University via Coursera)
Probabilistic Graphical Models 1: Representation (Stanford University via Coursera)
Computer Vision: The Fundamentals (University of California, Berkeley via Coursera)
Learning from Data (Introductory Machine Learning course) (California Institute of Technology via Independent)