
Agnostic Proper Learning of Monotone Functions: Beyond the Black-Box Correction Barrier

Offered By: Simons Institute via YouTube

Tags

Computational Learning Theory Courses Graph Theory Courses Convex Optimization Courses Sublinear Algorithms Courses

Course Description

Overview

Explore a 24-minute lecture on agnostic proper learning of monotone Boolean functions, presented by Jane Lange of the Massachusetts Institute of Technology at the Simons Institute. Delve into the first efficient algorithm for this problem, which outputs a monotone hypothesis that is (opt+ε)-close to an unknown function using 2^Õ(sqrt(n)/ε) uniformly random examples. Discover how this nearly matches the lower bound established by Blais et al. Learn about a related algorithm for estimating the distance of an unknown function to monotone within additive error ε. Understand how this work closes the gap between the run-time and sample complexity of previous algorithms. Examine the approach that overcomes the black-box correction barrier by augmenting the improper learner with convex optimization and by learning a real-valued function before rounding to a Boolean one. Gain insights into the "poset sorting" problem for functions over general posets with non-Boolean labels.
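To make the "distance to monotone" notion from the lecture concrete, the following Python sketch computes it exactly for tiny n by brute force over all monotone functions on the hypercube {0,1}^n. This is illustrative only, not the lecture's algorithm (which uses convex optimization and sampling to handle large n):

```python
from itertools import product

def is_monotone(f, n):
    """Check that flipping any 0-bit to 1 never decreases f.
    f is a tuple of 2^n bits, indexed by the integer bitmask of the input."""
    for x in range(2 ** n):
        for i in range(n):
            if not (x >> i) & 1:
                y = x | (1 << i)  # y dominates x in the hypercube partial order
                if f[x] > f[y]:
                    return False
    return True

def distance_to_monotone(f, n):
    """Fraction of inputs on which f must change to become monotone.
    Brute force over all 2^(2^n) Boolean functions; feasible only for tiny n."""
    best = 2 ** n
    for g in product([0, 1], repeat=2 ** n):
        if is_monotone(g, n):
            best = min(best, sum(a != b for a, b in zip(f, g)))
    return best / 2 ** n

n = 3
# Majority of 3 bits is monotone, so its distance is 0.
maj = tuple(1 if bin(x).count("1") >= 2 else 0 for x in range(2 ** n))
print(distance_to_monotone(maj, n))  # → 0.0
```

For intuition on the estimation problem the talk mentions: the anti-dictator f(x) = 1 - x_0 has distance exactly 1/2, since its 2^(n-1) violated pairs are disjoint and flipping one endpoint of each yields the all-zeros function.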

Syllabus

Agnostic Proper Learning of Monotone Functions: Beyond the Black-Box Correction Barrier


Taught by

Simons Institute

Related Courses

Applications of Graph Theory to Real Life
Miríadax
Applications of Graph Theory to Real Life
Universitat Politècnica de València via UPV [X]
Introduction to Computational Thinking and Data Science
Massachusetts Institute of Technology via edX
Genome Sequencing (Bioinformatics II)
University of California, San Diego via Coursera
Algorithmic Information Dynamics: From Networks to Cells
Santa Fe Institute via Complexity Explorer