YoVDO

Surprising Phenomena of Max-LP-Margin Classifiers in High Dimensions

Offered By: Institute for Pure & Applied Mathematics (IPAM) via YouTube

Tags

Machine Learning Courses
Neural Networks Courses
Interpolation Courses
Classification Courses
High-dimensional Statistics Courses

Course Description

Overview

Explore a 53-minute conference talk by Fanny Yang of ETH Zurich on "Surprising phenomena of max-lp-margin classifiers in high dimensions," presented at IPAM's Theory and Practice of Deep Learning Workshop. Delve into the analysis of max-lp-margin classifiers and their relevance to the implicit bias of first-order methods and to harmless interpolation in neural networks. Discover unexpected findings in the noiseless case: minimizing the l1-norm achieves optimal rates for regression with hard-sparse ground truths, yet this adaptivity does not carry over directly to max-l1-margin classification. Investigate how, under noisy observations, max-lp-margin classifiers can achieve 1/√n rates for p slightly larger than one, while maximum-l1-margin classifiers only achieve rates of order 1/√(log(d/n)). Gain insights into cutting-edge research in machine learning theory and its implications for deep learning practice.
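To make the central object concrete: on linearly separable data, the max-ℓ1-margin classifier can be computed as a linear program (minimize ‖w‖₁ subject to every training point having margin at least 1). The sketch below illustrates this on synthetic Gaussian data with a hard-sparse ground truth; it is an illustrative toy, not code from the talk, and all variable names and problem sizes are chosen here for demonstration.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, d = 20, 50  # high-dimensional regime: d > n, so the data are separable

# hard-sparse ground truth: only the first 3 coordinates are active
w_star = np.zeros(d)
w_star[:3] = 1.0
X = rng.standard_normal((n, d))
y = np.sign(X @ w_star)  # noiseless labels

# Max-l1-margin classifier: minimize ||w||_1  s.t.  y_i <x_i, w> >= 1.
# Split w = u - v with u, v >= 0 to obtain a standard-form LP.
c = np.ones(2 * d)                       # objective: sum(u) + sum(v) = ||w||_1
A_ub = -y[:, None] * np.hstack([X, -X])  # -y_i x_i^T (u - v) <= -1
b_ub = -np.ones(n)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * d))
w_hat = res.x[:d] - res.x[d:]

# every training point is classified with margin at least 1
assert np.all(y * (X @ w_hat) >= 1 - 1e-6)
```

The talk contrasts this p = 1 estimator with max-ℓp-margin classifiers for p slightly larger than one; those are no longer linear programs but can be solved by general convex solvers under the same margin constraints.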

Syllabus

Fanny Yang - Surprising phenomena of max-lp-margin classifiers in high dimensions - IPAM at UCLA


Taught by

Institute for Pure & Applied Mathematics (IPAM)

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Natural Language Processing
Columbia University via Coursera
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent