Surprising Phenomena of Max-LP-Margin Classifiers in High Dimensions
Offered By: Institute for Pure & Applied Mathematics (IPAM) via YouTube
Course Description
Overview
Explore a 53-minute conference talk by Fanny Yang of ETH Zurich, "Surprising phenomena of max-lp-margin classifiers in high dimensions," presented at IPAM's Theory and Practice of Deep Learning Workshop. Delve into the analysis of max-lp-margin classifiers and their relevance to the implicit bias of first-order methods and to harmless interpolation in neural networks. Discover an unexpected finding in the noiseless case: minimizing the l1-norm achieves optimal rates for regression with hard-sparse ground truths, but this adaptivity does not directly carry over to max-l1-margin classification. Investigate how, under noisy observations, max-lp-margin classifiers can achieve rates of order 1/√n for p slightly larger than one, while max-l1-margin classifiers only achieve rates of order 1/√(log(d/n)). Gain insights into cutting-edge research in machine learning theory and its implications for deep learning practice.
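To make the object of study concrete, below is a minimal sketch, not taken from the talk, of the standard convex program behind a max-lp-margin classifier: on linearly separable data, maximizing the lp-margin is equivalent to minimizing ||w||_p subject to y_i⟨x_i, w⟩ ≥ 1. The function name, toy data, and use of cvxpy are illustrative assumptions, not the speaker's code.

```python
import numpy as np
import cvxpy as cp

def max_lp_margin_classifier(X, y, p=1.0):
    """Max-lp-margin classifier via its equivalent convex program:
    minimize ||w||_p subject to y_i * <x_i, w> >= 1 for all i.
    On separable data, w / ||w||_p attains the maximum lp-margin."""
    n, d = X.shape
    w = cp.Variable(d)
    constraints = [cp.multiply(y, X @ w) >= 1]
    cp.Problem(cp.Minimize(cp.norm(w, p)), constraints).solve()
    return w.value

# Hypothetical high-dimensional toy setup (d >> n) with a hard-sparse
# ground truth, mirroring the regime discussed in the talk.
rng = np.random.default_rng(0)
n, d = 20, 200
w_star = np.zeros(d)
w_star[:3] = 1.0                       # hard-sparse signal
X = rng.standard_normal((n, d))
y = np.sign(X @ w_star)                # noiseless labels
w_hat = max_lp_margin_classifier(X, y, p=1.0)   # max-l1-margin
```

Choosing p slightly larger than one in the same program corresponds to the regime in which, per the overview above, the talk establishes 1/√n rates under noise.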
Syllabus
Fanny Yang - Surprising phenomena of max-lp-margin classifiers in high dimensions - IPAM at UCLA
Taught by
Institute for Pure & Applied Mathematics (IPAM)
Related Courses
K-Means and K-Medians Under Dimension Reduction - Simons Institute via YouTube
Can Non-Convex Optimization Be Robust? - Simons Institute via YouTube
Robust Estimation and Generative Adversarial Nets - Simons Institute via YouTube
Invariance, Causality and Novel Robustness - Simons Institute via YouTube
The Importance of Better Models in Stochastic Optimization - Simons Institute via YouTube