Interpolation in Learning - Steps Towards Understanding When Overparameterization Is Harmless, When It Helps, and When It Causes Harm
Offered By: Institute for Advanced Study via YouTube
Course Description
Overview
Explore the intricacies of interpolation in machine learning through this seminar on Theoretical Machine Learning, "Interpolation in learning: steps towards understanding when overparameterization is harmless, when it helps, and when it causes harm," presented by Anant Sahai of the University of California, Berkeley. Gain insights into the basic principles at work, the double descent phenomenon, and the interpolation regime. Examine lower bounds and visualizations, and consider whether this behavior is paradigmatic. Investigate aliasing and aliases, develop intuition through matrix interpretations, and understand minimum 2-norm interpolating solutions and why they behave as they do, with examples throughout this 1-hour-23-minute presentation from the Institute for Advanced Study.
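The overview mentions minimum 2-norm interpolating solutions in the overparameterized setting. As a loose illustration (not taken from the lecture itself, and with all names and numbers chosen only for the example), the following NumPy sketch shows how the pseudoinverse picks out the minimum 2-norm solution that exactly fits the training data when a linear model has more parameters than samples.

```python
import numpy as np

# Illustrative sketch (not from the talk): minimum 2-norm interpolation
# in an overparameterized linear model with n < d.
rng = np.random.default_rng(0)
n, d = 20, 100                                  # fewer samples than features
X = rng.standard_normal((n, d))                 # random design matrix
w_true = np.zeros(d)
w_true[0] = 1.0                                 # simple planted signal
y = X @ w_true + 0.1 * rng.standard_normal(n)   # noisy labels

# Among the infinitely many w satisfying X w = y, the pseudoinverse
# returns the one with smallest Euclidean norm: the minimum 2-norm
# interpolator.
w_min_norm = np.linalg.pinv(X) @ y

print("training residual:", np.linalg.norm(X @ w_min_norm - y))  # ~0, i.e. interpolation
print("solution norm:", np.linalg.norm(w_min_norm))
```

Varying `d` relative to `n` in a sketch like this is one common way to visualize the double descent behavior that the talk discusses.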
Syllabus
Introduction
Basic principle
Double descent phenomena
Interpolation regime
Ill climate talk
Lower bound
Visualizations
Is this paradigmatic?
Aliasing and aliases
Intuition
Matrix Intuition
Minimum 2-Norm Solutions
Why
Examples
Taught by
Institute for Advanced Study
Related Courses
Latent State Recovery in Reinforcement Learning - John Langford
Institute for Advanced Study via YouTube
On the Critic Function of Implicit Generative Models - Arthur Gretton
Institute for Advanced Study via YouTube
Priors for Semantic Variables - Yoshua Bengio
Institute for Advanced Study via YouTube
Instance-Hiding Schemes for Private Distributed Learning
Institute for Advanced Study via YouTube
Learning Probability Distributions - What Can, What Can't Be Done - Shai Ben-David
Institute for Advanced Study via YouTube