Why Do Our Models Learn?
Offered By: MIT CBMM via YouTube
Syllabus
Intro
Machine Learning Can Be Unreliable
Indeed: Machine Learning is Brittle
Backdoor Attacks
Key problem: Our models are merely (excellent!) correlation extractors
Indeed: Correlations can be weird
Simple Setting: Background bias
Do Backgrounds Contain Signal?
ImageNet-9: A Fine-Grained Study (Xiao, Engstrom, Ilyas, Madry 2020)
Adversarial Backgrounds
Background-Robust Models?
How Are Datasets Created?
Dataset Creation in Practice
Consequence: Benchmark-Task Misalignment
Prerequisite: Detailed Annotations
Ineffective Data Filtering
Multiple objects
Human-Label Disagreement
Human-Based Evaluation
Human vs ML Model Priors
Consequence: Adversarial Examples (Ilyas, Santurkar, Tsipras, Engstrom, Tran, Madry 2019): (standard) models tend to lean on "non-robust" features, and adversarial perturbations manipulate these features
Consequence: Interpretability
Consequence: Training Modifications
Robustness + Perception Alignment
Robustness + Better Representations
Counterfactual Analysis with Robust Models
ML Research Pipeline
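The "Adversarial Examples" segment above argues that standard models lean on non-robust features that tiny perturbations can manipulate. A minimal sketch of this idea, using the classic fast-gradient-sign attack on a toy linear classifier (an illustration of the general technique, not code from the lecture; the model and weights are hypothetical):

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Fast Gradient Sign Method: move every input coordinate by eps
    in the direction that increases the loss, exploiting whatever
    (possibly non-robust) features the model relies on."""
    return x + eps * np.sign(grad)

# Toy linear classifier: score = w . x; with label y in {-1, +1},
# a margin-style loss decreases in y * score, so its input
# gradient is simply -y * w.
w = np.array([0.2, -0.5, 0.8])   # hypothetical learned weights
x = np.array([1.0, 1.0, 1.0])    # a "clean" input
y = 1                            # true label

grad = -y * w                    # analytic input gradient of the loss
x_adv = fgsm_perturb(x, grad, eps=0.1)

print(x_adv)  # each coordinate moved by at most eps toward misclassification
```

The same recipe applies to deep networks, with the input gradient obtained by backpropagation instead of this closed form.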
Taught by
MIT CBMM