Removing Spurious Features Can Hurt Accuracy and Affect Groups Disproportionately
Offered By: Association for Computing Machinery (ACM) via YouTube
Course Description
Overview
Explore a thought-provoking conference talk that examines the unintended consequences of removing spurious features from machine learning models. Learn how this practice, often motivated by robustness and fairness goals, can paradoxically reduce accuracy and affect certain groups disproportionately. Through an analysis of multiple datasets and experimental setups, gain insight into the interplay between feature selection, model accuracy, and fairness in AI systems. Understand the implications of these findings for building more robust and equitable machine learning algorithms, and consider the broader ethical questions they raise for AI research and development.
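The intuition behind the talk's title can be seen in a small experiment: a feature that is "spurious" (not causally related to the label) can still be predictive, so removing it may cost accuracy, and the cost can differ across groups. The sketch below illustrates this on synthetic data; the group sizes, feature names, and noise levels are illustrative assumptions, not the setup used in the talk.

```python
# Minimal synthetic sketch (not the talk's experiments): a spurious feature
# correlated with the label carries real predictive signal, so dropping it
# lowers accuracy, and the drop differs between a majority and a minority group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, spur_strength):
    """Label depends on a noisy core feature; the spurious feature tracks
    the label with a group-dependent strength."""
    y = rng.integers(0, 2, size=n)
    x_core = y + rng.normal(0, 1.5, size=n)                        # weak core signal
    x_spur = spur_strength * (2 * y - 1) + rng.normal(0, 1.0, size=n)
    return np.column_stack([x_core, x_spur]), y

# Majority group: spurious feature is highly informative; minority: much less so.
X_maj, y_maj = make_group(5000, spur_strength=2.0)
X_min, y_min = make_group(500, spur_strength=0.5)
X = np.vstack([X_maj, X_min])
y = np.concatenate([y_maj, y_min])
group = np.array([0] * len(y_maj) + [1] * len(y_min))              # 0 = majority, 1 = minority

def group_accuracies(feature_idx):
    """Fit on the selected features and report per-group training accuracy."""
    clf = LogisticRegression().fit(X[:, feature_idx], y)
    pred = clf.predict(X[:, feature_idx])
    return [(pred[group == g] == y[group == g]).mean() for g in (0, 1)]

acc_with = group_accuracies([0, 1])       # core + spurious feature
acc_without = group_accuracies([0])       # spurious feature removed

print("with spurious feature    (maj, min):", np.round(acc_with, 3))
print("without spurious feature (maj, min):", np.round(acc_without, 3))
```

With these (assumed) settings, removing the spurious feature costs both groups accuracy, but the group whose predictions relied more heavily on that feature loses far more, which is the disproportionate effect the talk investigates.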
Syllabus
Introduction
Accuracy Drop
Setup
Data Sets
Results
Other Results
Conclusion
Taught by
ACM FAccT Conference
Related Courses
Can Algorithms Bend the Arc Toward Justice?
Santa Fe Institute via YouTube
Measurement and Fairness
Association for Computing Machinery (ACM) via YouTube
Chasing Your Long Tails - Differentially Private Prediction in Health Care Settings
Association for Computing Machinery (ACM) via YouTube
One Label, One Billion Faces - Usage and Consistency of Racial Categories in Computer Vision
Association for Computing Machinery (ACM) via YouTube
Group Fairness - Independence Revisited
Association for Computing Machinery (ACM) via YouTube