Agnostic Proper Learning of Monotone Functions: Beyond the Black-Box Correction Barrier

Offered By: Simons Institute via YouTube

Tags

Computational Learning Theory Courses Graph Theory Courses Convex Optimization Courses Sublinear Algorithms Courses

Course Description

Overview

Explore a 24-minute lecture on agnostic proper learning of monotone Boolean functions, presented by Jane Lange of the Massachusetts Institute of Technology at the Simons Institute. Delve into the first efficient algorithm for this problem, which outputs a monotone hypothesis that is (opt+ε)-close to an unknown function using 2^Õ(sqrt(n)/ε) uniformly random examples, nearly matching the lower bound established by Blais et al. Learn about a related algorithm for estimating the distance of an unknown function to monotonicity within additive error ε. Understand how this work closes the gap between the running time and sample complexity of previous algorithms. Examine the approach that overcomes the black-box correction barrier by augmenting the improper learner with convex optimization and handling real-valued functions before rounding to Boolean values. Gain insights into the "poset sorting" problem for functions over general posets with non-Boolean labels.
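The central quantity here, a function's distance to monotonicity, is the minimum fraction of inputs whose labels must be flipped to make the function monotone. The lecture's algorithm estimates this efficiently; as a point of reference, the sketch below (an illustrative brute force, not the lecture's method) computes it exactly for tiny n by enumerating all Boolean functions on the hypercube:

```python
from itertools import product

def is_monotone(f, n):
    """Check f : {0,1}^n -> {0,1} (a dict keyed by bit tuples) for monotonicity."""
    pts = list(product((0, 1), repeat=n))
    for x in pts:
        for y in pts:
            # x <= y coordinatewise but f(x) > f(y) is a monotonicity violation
            if all(a <= b for a, b in zip(x, y)) and f[x] > f[y]:
                return False
    return True

def dist_to_monotone(f, n):
    """Fraction of points that must be relabeled to reach the nearest
    monotone function (brute force over all 2^(2^n) functions)."""
    pts = list(product((0, 1), repeat=n))
    best = len(pts)
    for bits in product((0, 1), repeat=len(pts)):
        g = dict(zip(pts, bits))
        if is_monotone(g, n):
            best = min(best, sum(f[x] != g[x] for x in pts))
    return best / len(pts)

# Example: 2-bit XOR; relabeling the single point (1,1) makes it monotone
f = {x: x[0] ^ x[1] for x in product((0, 1), repeat=2)}
print(dist_to_monotone(f, 2))  # 0.25
```

This exhaustive search is doubly exponential in n, which is exactly why the sublinear-style estimator discussed in the talk, with additive error ε, is needed for large n.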

Syllabus

Agnostic Proper Learning of Monotone Functions: Beyond the Black-Box Correction Barrier


Taught by

Simons Institute

Related Courses

Machine Learning 1—Supervised Learning
Brown University via Udacity
Computational Learning Theory and Beyond
openHPI
Leslie G. Valiant - Turing Award Lecture 2010
Association for Computing Machinery (ACM) via YouTube
Learning of Neural Networks with Quantum Computers and Learning of Quantum States with Graphical Models
Institute for Pure & Applied Mathematics (IPAM) via YouTube
Smoothed Analysis for Learning Concepts with Low Intrinsic Dimension
Institute for Pure & Applied Mathematics (IPAM) via YouTube