What Do Our Models Learn?
Offered By: MITCBMM via YouTube
Course Description
Overview
Syllabus
Intro
Machine Learning Can Be Unreliable
Indeed: Machine Learning is Brittle
Backdoor Attacks
Key problem: Our models are merely (excellent!) correlation extractors
Indeed: Correlations can be weird
Simple Setting: Background bias
Do Backgrounds Contain Signal?
ImageNet-9: A Fine-Grained Study (Xiao, Engstrom, Ilyas, Madry 2020)
Adversarial Backgrounds
Background-Robust Models?
How Are Datasets Created?
Dataset Creation in Practice
Consequence: Benchmark-Task Misalignment
Prerequisite: Detailed Annotations
Ineffective Data Filtering
Multiple objects
Human-Label Disagreement
Human-Based Evaluation
Human vs ML Model Priors
Consequence: Adversarial Examples (Ilyas, Santurkar, Tsipras, Engstrom, Tran, Madry 2019): standard models tend to lean on "non-robust" features, and adversarial perturbations manipulate exactly these features (see the sketch after the syllabus)
Consequence: Interpretability
Consequence: Training Modifications
Robustness + Perception Alignment
Robustness + Better Representations
Counterfactual Analysis with Robust Models
ML Research Pipeline
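
The "Adversarial Examples" item above notes that standard models lean on non-robust features that small perturbations can manipulate. As a minimal, illustrative sketch of that idea (not code from the lecture), the Python snippet below nudges an input along the sign of the loss gradient, FGSM-style, and compares the model's prediction before and after. The untrained ResNet-18, the random placeholder image and label, and the perturbation budget of 0.03 are assumptions chosen only for illustration.

    # Minimal FGSM-style sketch: perturb an input along the loss gradient
    # and see whether the prediction changes. Illustrative assumptions:
    # untrained ResNet-18, random "image", epsilon = 0.03.
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    model = models.resnet18(weights=None)  # stand-in classifier
    model.eval()

    x = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder image
    y = torch.tensor([0])                                # placeholder label

    # Gradient of the loss with respect to the input pixels.
    loss = F.cross_entropy(model(x), y)
    loss.backward()

    # Step each pixel slightly in the direction that increases the loss.
    epsilon = 0.03  # assumed L-infinity perturbation budget
    x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

    with torch.no_grad():
        print("clean prediction:    ", model(x).argmax(dim=1).item())
        print("perturbed prediction:", model(x_adv).argmax(dim=1).item())

The later syllabus items on robustness (e.g., "Robustness + Perception Alignment", "Robustness + Better Representations") concern models trained to resist this kind of manipulation.
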
Taught by
MITCBMM
Related Courses
Data Analysis (Johns Hopkins University via Coursera)
Computing for Data Analysis (Johns Hopkins University via Coursera)
Scientific Computing (University of Washington via Coursera)
Introduction to Data Science (University of Washington via Coursera)
Web Intelligence and Big Data (Indian Institute of Technology Delhi via Coursera)