Disentangling Influence: Using Disentangled Representations to Audit Model Predictions

Offered By: Simons Institute via YouTube

Tags

Supervised Learning Courses

Course Description

Overview

Explore how disentangled representations can be used to audit model predictions in this 27-minute lecture by Charlie Marx of Haverford College. Delve into recent developments in fairness research, examining direct and indirect influence, independent settings, and the supervised learning process. Understand the driving methodology behind disentangled representations, including independent factors, group actions, and world states. Examine the experimental results, covering reconstruction and prediction error, feature audits, and the shape-scale-orientation experiments. Conclude with the limitations of the approach and open questions in machine learning fairness.
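
The core idea the lecture covers, separating an audited feature from the remaining factors of variation and then intervening on it to see how a black-box model's predictions change, can be illustrated with a deliberately simplified sketch. The snippet below is not the method presented in the talk or its accompanying paper: it substitutes a plain linear residualization for the learned disentangled autoencoder, uses synthetic data, and all variable names are hypothetical.

```python
# Illustrative sketch only: linear residualization stands in for the
# learned disentangled representation discussed in the lecture.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic data: `a` is the audited feature; `x` is a proxy correlated with `a`.
a = rng.normal(size=n)
x = 0.9 * a + 0.4 * rng.normal(size=n)               # proxy feature
y = (x + 0.1 * rng.normal(size=n) > 0).astype(int)   # label depends on x only

X = np.column_stack([a, x])

# A "black box" classifier to audit; it takes both features as inputs.
model = LogisticRegression().fit(X, y)
baseline_acc = model.score(X, y)

# Step 1: split the proxy feature into a component explained by `a`
# and a residual that is (linearly) independent of `a`.
proxy_on_a = LinearRegression().fit(a.reshape(-1, 1), x)
x_resid = x - proxy_on_a.predict(a.reshape(-1, 1))

# Step 2: intervene on the separated factor: neutralize `a` and rebuild
# the inputs, keeping only the part of `x` that does not depend on `a`.
a_neutral = np.zeros_like(a)
x_counterfactual = proxy_on_a.predict(a_neutral.reshape(-1, 1)) + x_resid
X_counterfactual = np.column_stack([a_neutral, x_counterfactual])

# Step 3: the change in accuracy is one crude measure of the total
# (direct + indirect) influence of `a` on the model's predictions.
audited_acc = model.score(X_counterfactual, y)
print(f"accuracy before intervention: {baseline_acc:.3f}")
print(f"accuracy after removing the audited feature's influence: {audited_acc:.3f}")
```

In the lecture's setting, the residualization step is replaced by an autoencoder trained so that one latent coordinate captures the audited feature and the rest are independent of it, which is what allows the audit to account for indirect influence carried by proxy features.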

Syllabus

Introduction
Background
General Question
Direct and Indirect Influence
Goal
Driving Methodology
Independent Settings
Independent Factors
Group Actions
Independent World States
Indirect Influence
Supervised Learning
Combining Processes
Independent Features
Autoencoder
Experimental Results
Reconstruction Error
Prediction Error
Feature Audit
Shape Scale Orientation
Recurse
Limitations
Questions


Taught by

Simons Institute

Related Courses

Machine Learning
University of Washington via Coursera
Machine Learning
Stanford University via Coursera
Machine Learning
Georgia Institute of Technology via Udacity
Statistical Learning with R
Stanford University via edX
Machine Learning 1—Supervised Learning
Brown University via Udacity