Agnostic Proper Learning of Monotone Functions: Beyond the Black-Box Correction Barrier
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore a 24-minute lecture on agnostic proper learning of monotone Boolean functions, presented by Jane Lange from the Massachusetts Institute of Technology at the Simons Institute. Delve into the first efficient algorithm for this problem, which outputs a monotone hypothesis that is (opt+ε)-close to an unknown function using 2^Õ(√n/ε) uniformly random examples. Discover how this algorithm nearly matches the lower bound established by Blais et al. Learn about a related algorithm for estimating the distance of an unknown function to monotone within additive error ε. Understand how this work closes the gap between the running time and sample complexity of previous algorithms. Examine the approach that overcomes the black-box correction barrier by augmenting the improper learner with convex optimization and by handling real-valued functions before rounding to Boolean values. Gain insights into the "poset sorting" problem for functions over general posets with non-Boolean labels.
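The lecture's algorithm estimates the distance to monotone from uniformly random samples in 2^Õ(√n/ε) time. As a point of reference only, the sketch below is not the algorithm from the talk: it is a minimal brute-force computation of the exact distance to monotone for tiny n, using the standard characterization of that distance as a maximum matching in the bipartite violation graph (1-labeled points versus 0-labeled points, with an edge for each violated pair). The function names and the anti-dictator test case are illustrative choices, not taken from the source.

from itertools import product


def dominates(y, x):
    # True if y is coordinatewise >= x (both are 0/1 tuples).
    return all(xi <= yi for xi, yi in zip(x, y))


def distance_to_monotone(f, n):
    # Exact fractional distance of f: {0,1}^n -> {0,1} from monotone,
    # via maximum matching in the bipartite violation graph
    # (assumed characterization; see lead-in above).
    cube = list(product((0, 1), repeat=n))
    ones = [x for x in cube if f(x) == 1]
    zeros = [y for y in cube if f(y) == 0]
    # A violation is a pair x <= y with f(x) = 1 and f(y) = 0.
    adj = {x: [y for y in zeros if dominates(y, x)] for x in ones}

    match = {}  # maps a 0-labeled point to its matched 1-labeled point

    def augment(x, seen):
        # Kuhn's augmenting-path step for bipartite matching.
        for y in adj[x]:
            if y in seen:
                continue
            seen.add(y)
            if y not in match or augment(match[y], seen):
                match[y] = x
                return True
        return False

    matching_size = sum(augment(x, set()) for x in ones)
    return matching_size / 2 ** n


if __name__ == "__main__":
    # Toy check: the anti-dictator f(x) = 1 - x_1 on 3 bits is at
    # distance 1/2 from monotone (a closest monotone function is constant).
    print(distance_to_monotone(lambda x: 1 - x[0], 3))  # prints 0.5

This brute force enumerates all 2^n points, so it is only a way to see what quantity the lecture's sample-efficient estimator approximates, not a substitute for it.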
Syllabus
Agnostic Proper Learning of Monotone Functions: Beyond the Black-Box Correction Barrier
Taught by
Simons Institute
Related Courses
Sampling-Based Sublinear Low-Rank Matrix Arithmetic Framework for Dequantizing Quantum Machine Learning
Association for Computing Machinery (ACM) via YouTube
Sublinear Algorithms for Gap Edit Distance
IEEE via YouTube
High Dimensional Robust Sparse Regression
Simons Institute via YouTube
Learning-Augmented Sketches for Frequency Estimation
Simons Institute via YouTube
Adaptive Sparse Recovery with Limited Adaptivity
Simons Institute via YouTube