Surprising Phenomena of Max-lp-Margin Classifiers in High Dimensions
Offered By: Institute for Pure & Applied Mathematics (IPAM) via YouTube
Course Description
Overview
Explore a 53-minute conference talk by Fanny Yang from ETH Zurich on "Surprising phenomena of max-lp-margin classifiers in high dimensions", presented at IPAM's Theory and Practice of Deep Learning Workshop. Delve into the analysis of max-lp-margin classifiers and their relevance to the implicit bias of first-order methods and to harmless interpolation in neural networks. Discover unexpected findings in the noiseless case: minimizing the l1-norm achieves optimal rates for regression with hard-sparse ground truths, yet this adaptivity does not carry over directly to max-l1-margin classification. Investigate how, for noisy observations, max-lp-margin classifiers with p slightly larger than one can achieve rates of order 1/√n, while max-l1-margin classifiers only achieve rates of order 1/√(log(d/n)). Gain insights into cutting-edge research in machine learning theory and its implications for deep learning practice.
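For a concrete handle on the object studied in the talk, the max-lp-margin classifier is the direction w maximizing the minimum margin min_i y_i <x_i, w> over the lp-ball ||w||_p <= 1. For p = 1 this is a linear program. The sketch below is illustrative only and not code from the talk; the function name max_l1_margin, the LP encoding, and the synthetic data are all assumptions, solved here with scipy.optimize.linprog:

    import numpy as np
    from scipy.optimize import linprog

    def max_l1_margin(X, y):
        """Max-l1-margin classifier via a linear program (illustrative sketch).

        Solves max_{||w||_1 <= 1} min_i y_i <x_i, w> in epigraph form:
        maximize t subject to y_i <x_i, w> >= t for all i,
        |w_j| <= u_j for all j, and sum_j u_j <= 1.
        Decision variables are stacked as z = [w (d), u (d), t (1)].
        """
        n, d = X.shape
        c = np.zeros(2 * d + 1)
        c[-1] = -1.0  # linprog minimizes, so minimize -t to maximize the margin t

        # Margin constraints: t - y_i <x_i, w> <= 0
        A_margin = np.hstack([-y[:, None] * X, np.zeros((n, d)), np.ones((n, 1))])

        # Absolute-value constraints: w_j - u_j <= 0 and -w_j - u_j <= 0
        I = np.eye(d)
        A_abs = np.vstack([
            np.hstack([I, -I, np.zeros((d, 1))]),
            np.hstack([-I, -I, np.zeros((d, 1))]),
        ])

        # l1-ball constraint: sum_j u_j <= 1
        A_ball = np.hstack([np.zeros(d), np.ones(d), [0.0]])[None, :]

        A_ub = np.vstack([A_margin, A_abs, A_ball])
        b_ub = np.zeros(A_ub.shape[0])
        b_ub[-1] = 1.0  # right-hand side of the l1-ball constraint

        # w and t are free; the slack variables u are nonnegative
        bounds = [(None, None)] * d + [(0, None)] * d + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        return res.x[:d], res.x[-1]  # (classifier w, achieved margin t)

A usage example in the high-dimensional, hard-sparse regime the description refers to (n << d, noiseless labels):

    rng = np.random.default_rng(0)
    n, d = 50, 200
    X = rng.standard_normal((n, d))
    w_star = np.zeros(d)
    w_star[:3] = 1.0                 # hard-sparse ground truth
    y = np.sign(X @ w_star)          # noiseless labels
    w_hat, margin = max_l1_margin(X, y)

For p strictly between 1 and 2 the max-margin problem is no longer a linear program but remains convex; that is the regime in which the description reports rates of order 1/√n.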
Syllabus
Fanny Yang - Surprising phenomena of max-lp-margin classifiers in high dimensions - IPAM at UCLA
Taught by
Institute for Pure & Applied Mathematics (IPAM)
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Natural Language Processing - Columbia University via Coursera
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent