Know When You Know: Handling Adversarial Data by Abstaining
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore a cutting-edge approach to sequential prediction in stochastic settings with adversarial interference in this one-hour lecture by Surbhi Goel from the University of Pennsylvania. Delve into the challenges posed by clean-label adversarial examples and distribution shifts, which can undermine the guarantees of traditional algorithms. Discover a framework that lets the learner abstain from predicting on adversarial injections without penalty, so that predictions are made only when the learner is confident. Learn about algorithms designed within this model that retain the guarantees of the purely stochastic setting even in the presence of many adversarial examples. Conclude by examining open questions raised by this framework and their implications for machine learning in adversarial environments.
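To make the abstention idea concrete, here is a minimal, hypothetical sketch, not the lecture's actual algorithm: over a toy finite hypothesis class, the learner predicts only when every hypothesis still consistent with the observed labels agrees, and abstains otherwise. Mistakes are counted only on confident predictions, mirroring the principle that abstentions incur no penalty. All names (`make_threshold_classifiers`, `abstaining_learner`) and the threshold-classifier class are illustrative assumptions.

```python
# Hypothetical illustration of prediction-with-abstention via version-space
# disagreement (halving-style); NOT the algorithm from the lecture.

def make_threshold_classifiers(n):
    """Toy hypothesis class: 1-D thresholds h_t(x) = 1 iff x >= t."""
    # t=t binds each threshold at definition time (avoids late binding).
    return [(lambda x, t=t: int(x >= t)) for t in range(n + 1)]

def abstaining_learner(hypotheses, stream):
    """Predict when all surviving hypotheses agree; abstain on disagreement."""
    version_space = list(hypotheses)
    log = []
    for x, y in stream:
        votes = {h(x) for h in version_space}
        if len(votes) == 1:
            pred = votes.pop()   # confident: every survivor agrees
        else:
            pred = None          # abstain: survivors disagree, no penalty
        log.append(pred)
        # Update with the revealed label (assumes realizable, clean labels).
        version_space = [h for h in version_space if h(x) == y]
    return log

# Usage: labels come from threshold t = 3. The learner abstains while the
# version space disagrees and never errs on a confident prediction.
hs = make_threshold_classifiers(5)
stream = [(0, 0), (5, 1), (2, 0), (4, 1), (3, 1)]
print(abstaining_learner(hs, stream))
```

The design choice worth noting: abstaining whenever the version space disagrees guarantees zero mistakes on confident rounds in the realizable case, at the cost of more abstentions; the lecture's framework studies how to keep such guarantees even when an adversary injects examples into the stream.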
Syllabus
Know When You Know: Handling Adversarial Data by Abstaining
Taught by
Simons Institute
Related Courses
Introduction to Artificial Intelligence (Stanford University via Udacity)
Natural Language Processing (Columbia University via Coursera)
Probabilistic Graphical Models 1: Representation (Stanford University via Coursera)
Computer Vision: The Fundamentals (University of California, Berkeley via Coursera)
Learning from Data (Introductory Machine Learning course) (California Institute of Technology via Independent)