Why Do Our Models Learn?
Offered By: MIT CBMM via YouTube
Course Description
Overview
Syllabus
Intro
Machine Learning Can Be Unreliable
Indeed: Machine Learning is Brittle
Backdoor Attacks
Key problem: Our models are merely (excellent!) correlation extractors
Indeed: Correlations can be weird
Simple Setting: Background bias
Do Backgrounds Contain Signal?
ImageNet-9: A Fine-Grained Study (Xiao, Engstrom, Ilyas, M 2020)
Adversarial Backgrounds
Background-Robust Models?
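The background-bias sections above can be illustrated with a toy sketch (not from the lecture): when a "background" feature correlates with the label almost perfectly during training, even simple logistic regression learns to lean on it, and accuracy collapses once backgrounds are paired against the label, loosely mirroring the adversarial-background experiment. All features, correlation levels, and hyperparameters below are illustrative assumptions, not the study's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, bg_corr):
    # y in {0,1}; "object" feature is a weak noisy cue, while the
    # "background" feature matches the label with probability bg_corr.
    y = rng.integers(0, 2, n)
    obj = y + rng.normal(0.0, 1.0, n)                # weak object signal
    bg_match = rng.random(n) < bg_corr
    bg = np.where(bg_match, y, 1 - y) + rng.normal(0.0, 0.1, n)
    return np.column_stack([obj, bg]), y

def train_logreg(X, y, lr=0.1, steps=2000):
    # Plain batch gradient descent on the logistic loss.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def accuracy(w, b, X, y):
    return ((X @ w + b > 0) == y).mean()

X_tr, y_tr = make_data(5000, bg_corr=0.95)           # backgrounds highly predictive
w, b = train_logreg(X_tr, y_tr)

X_same, y_same = make_data(5000, bg_corr=0.95)       # same correlation as training
X_flip, y_flip = make_data(5000, bg_corr=0.05)       # "adversarial" backgrounds

print(f"object weight {w[0]:.2f} vs background weight {w[1]:.2f}")
print(f"acc, same backgrounds:    {accuracy(w, b, X_same, y_same):.2f}")
print(f"acc, flipped backgrounds: {accuracy(w, b, X_flip, y_flip):.2f}")
```

The model is accurate in-distribution yet far below chance once the background cue is flipped, because the learned weight on the background feature dwarfs the weight on the object feature.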
How Are Datasets Created?
Dataset Creation in Practice
Consequence: Benchmark-Task Misalignment
Prerequisite: Detailed Annotations
Ineffective Data Filtering
Multiple objects
Human-Label Disagreement
Human-Based Evaluation
Human vs ML Model Priors
Consequence: Adversarial Examples (Ilyas, Santurkar, Tsipras, Engstrom, Tran, M 2019): standard models tend to lean on "non-robust" features, and adversarial perturbations manipulate these features
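The non-robust-features claim admits a minimal sketch (not the paper's construction): for a linear model whose weight is spread across many weakly predictive coordinates, an FGSM-style step of size eps against sign(w) shifts the score by eps times the sum of |w|, so a per-coordinate change that is small relative to the input's scale can flip the prediction. The dimension, seed, and eps below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 1000                               # many weakly predictive features
w = rng.normal(0.0, 1.0 / np.sqrt(d), d)  # hypothetical learned weights
x = rng.normal(0.0, 1.0, d)               # a clean input

score = x @ w
label = int(score > 0)

# FGSM-style L-infinity perturbation: move every coordinate by eps
# against the gradient's sign, which for a linear score is sign(w).
eps = 0.2
delta = (-eps if label == 1 else eps) * np.sign(w)
x_adv = x + delta

print(f"clean score {score:+.3f} -> adversarial score {x_adv @ w:+.3f}")
print(f"max per-coordinate change: {np.abs(delta).max():.2f}")
```

Each coordinate moves by at most 0.2, yet the aggregate shift across 1000 coordinates is large enough to flip the sign of the score, which is the sense in which many individually faint "non-robust" features are easy to manipulate.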
Consequence: Interpretability
Consequence: Training Modifications
Robustness + Perception Alignment
Robustness + Better Representations
Counterfactual Analysis with Robust Models
ML Research Pipeline
Taught by
MIT CBMM
Related Courses
Introduction to Artificial Intelligence (Stanford University via Udacity)
Natural Language Processing (Columbia University via Coursera)
Probabilistic Graphical Models 1: Representation (Stanford University via Coursera)
Computer Vision: The Fundamentals (University of California, Berkeley via Coursera)
Learning from Data (Introductory Machine Learning course) (California Institute of Technology via Independent)