
Adversarial Examples and Human-ML Alignment

Offered By: MITCBMM via YouTube

Tags

Machine Learning Courses, Data Analysis Courses, Deep Learning Courses, Interpretability Courses

Course Description

Overview

Explore adversarial examples and human-ML alignment in this lecture by Aleksander Madry of MIT. Delve into the comparison between deep networks and human vision, and examine a natural view of adversarial examples. Investigate why adversarial perturbations are problematic from both the human and the machine learning perspective. Analyze the robust features model and its implications for interpretability, training modifications, and robustness tradeoffs. Discover how robustness relates to perceptual alignment and improved representations. Address the challenge of unusual correlations in data and learn about counterfactual analysis with robust models. Gain insight into how adversarial examples arise from non-robust features in datasets.
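
As a concrete illustration of the adversarial perturbations the lecture examines, below is a minimal sketch of the fast gradient sign method (FGSM), one standard way to craft an adversarial example. This is not code from the lecture; `model`, `image`, `label`, and `epsilon` are hypothetical placeholders.

```python
# Minimal FGSM sketch (an illustration, not code from the lecture).
# `model` is a trained classifier; `image` and `label` are one input batch.
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=8 / 255):
    """Nudge `image` by epsilon * sign(grad_x loss) to increase the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # One signed gradient step, then clip back to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

The point the lecture builds on is that such a perturbation is visually negligible to a human yet reliably changes the model's prediction.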

Syllabus

Adversarial Examples and Human-ML Alignment (Aleksander Madry)
Deep Networks: Towards Human Vision?
A Natural View on Adversarial Examples
Why Are Adv. Perturbations Bad?
Human Perspective
ML Perspective
The Robust Features Model
The Simple Experiment: A Second Look
Human vs ML Model Priors
In fact, models...
Consequence: Interpretability
Consequence: Training Modifications (see the code sketch after this syllabus)
Consequence: Robustness Tradeoffs
Robustness + Perception Alignment
Robustness + Better Representations
Problem: Correlations can be weird
"Counterfactual" Analysis with Robust Models
Adversarial examples arise from non-robust features in the data
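
The "Consequence: Training Modifications" item above refers to robust (adversarial) training: minimize the loss on worst-case perturbed inputs rather than on clean ones. Below is a minimal PyTorch sketch under assumed hyperparameters (an L-infinity ball with CIFAR-style epsilon); `model`, `loader`, and `optimizer` are hypothetical.

```python
# Sketch of adversarial (robust) training; hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, step=2 / 255, iters=7):
    """Approximate the max-loss point inside an L-infinity epsilon-ball."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Signed gradient ascent step, then project back into the ball.
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

def robust_training_epoch(model, loader, optimizer):
    """One epoch of the min-max objective: fit on worst-case inputs."""
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)              # inner maximization
        optimizer.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()  # outer minimization
        optimizer.step()
```

Training on these worst-case inputs is what pushes the model away from the non-robust features that, per the closing slide above, give rise to adversarial examples.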


Taught by

MITCBMM
