Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification
Offered By: Association for Computing Machinery (ACM) via YouTube
Course Description
Overview
Watch a thought-provoking conference talk from FAT* 2018 where Joy Buolamwini presents groundbreaking research on intersectional accuracy disparities in commercial gender classification systems. Explore the motivations behind the study, understand the benchmarking process, and delve into the evaluation of gender classification accuracy across different skin types and genders. Learn how major technology companies, including Microsoft, Face++, and IBM, responded to the findings. Gain insights into the importance of intersectionality in AI, the dangers of biased data, and the ethical implications of deploying such technology. Engage with key takeaways and a Q&A session addressing critical questions about the Fitzpatrick scale and the broader implications of this research for the field of artificial intelligence and society at large.
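To make the talk's evaluation approach concrete, below is a minimal sketch (not the authors' released code) of an intersectional accuracy computation: predictions are grouped by the combination of gender label and a binarized skin-type category, and accuracy is reported per subgroup along with the largest gap between subgroups. The record keys, the lighter/darker split at Fitzpatrick type III, and the toy data are illustrative assumptions.

```python
from collections import defaultdict

def intersectional_accuracy(records):
    """Compute accuracy per (gender, skin-type) subgroup and the largest gap.

    Each record is a dict with illustrative (assumed) keys: 'true_gender',
    'predicted_gender', and 'fitzpatrick' (skin-type score I-VI as an int 1-6).
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        # Bin Fitzpatrick types I-III as "lighter" and IV-VI as "darker",
        # mirroring the binarization described in the talk.
        skin = "lighter" if r["fitzpatrick"] <= 3 else "darker"
        group = (r["true_gender"], skin)
        total[group] += 1
        correct[group] += int(r["predicted_gender"] == r["true_gender"])

    accuracy = {g: correct[g] / total[g] for g in total}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap

# Toy usage with made-up predictions:
records = [
    {"true_gender": "female", "predicted_gender": "female", "fitzpatrick": 5},
    {"true_gender": "female", "predicted_gender": "male",   "fitzpatrick": 6},
    {"true_gender": "male",   "predicted_gender": "male",   "fitzpatrick": 2},
    {"true_gender": "female", "predicted_gender": "female", "fitzpatrick": 1},
]
per_group, gap = intersectional_accuracy(records)
print(per_group)  # accuracy per (gender, skin type) subgroup
print(gap)        # largest accuracy disparity across subgroups
```

Reporting the per-subgroup breakdown rather than a single overall number is the point of the intersectional evaluation: an aggregate accuracy can look strong while one subgroup, such as darker-skinned women, performs far worse.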
Syllabus
Introduction
Motivation
Background
Gender Classification
Benchmarks
Labeling
Benchmark Limitations
Overall Accuracy
Accuracy by Gender
Accuracy on Skin Type
Intersectional Evaluation for Gender Classification
Microsoft
Face++
IBM
Company Responses
Microsoft Response
IBM Response
Key Takeaways
Intersectionality Matters
The Dangers of Supremely White Data
How Is This Technology Used?
Is This a Good Thing?
How This Technology Is Used
Quick Question
Question
Question: The Fitzpatrick Scale
Conclusion
Taught by
ACM FAccT Conference
Related Courses
Data for Machine Learning (Alberta Machine Intelligence Institute via Coursera)
Microsoft Future Ready: Ethics and Laws in Data and Analytics (Cloudswyft via FutureLearn)
AI Strategy and Governance (University of Pennsylvania via Coursera)
Preparar datos para la exploración (Google via Coursera)
Daten für die Erkundung Vorbereiten (Google via Coursera)