Gender Shades - Intersectional Accuracy Disparities in Commercial Gender Classification
Offered By: Association for Computing Machinery (ACM) via YouTube
Course Description
Overview
Watch a thought-provoking conference talk from FAT* 2018 in which Joy Buolamwini presents groundbreaking research on intersectional accuracy disparities in commercial gender classification systems. Explore the motivations behind the study, understand the benchmarking process, and delve into the evaluation of gender classification accuracy across different skin types and genders. Learn how the evaluated companies, Microsoft, Face++, and IBM, responded to the findings. Gain insights into the importance of intersectionality in AI, the dangers of biased data, and the ethical implications of deploying such technology. Engage with key takeaways and a Q&A session addressing questions about the Fitzpatrick skin type scale and the broader implications of this research for artificial intelligence and society at large.
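The core methodology discussed in the talk is disaggregated evaluation: rather than reporting a single overall accuracy figure, accuracy is computed separately for intersectional subgroups (for example, darker-skinned women) and the gap between the best- and worst-served groups is examined. The sketch below is a minimal illustration of that idea; the field names, skin-type groupings, and sample records are hypothetical and it is not the study's actual code.

```python
# Illustrative sketch (not the authors' code): disaggregating classifier
# accuracy by intersectional subgroup, in the spirit of the Gender Shades
# evaluation. Labels, field names, and the example records are hypothetical.
from collections import defaultdict

def intersectional_accuracy(records):
    """Compute accuracy per (gender, skin type group) subgroup.

    Each record is a dict with 'gender', 'skin_group' (e.g. 'lighter' or
    'darker', derived from Fitzpatrick types), and 'correct' (bool).
    """
    totals = defaultdict(int)
    hits = defaultdict(int)
    for r in records:
        key = (r["gender"], r["skin_group"])
        totals[key] += 1
        hits[key] += int(r["correct"])
    return {key: hits[key] / totals[key] for key in totals}

if __name__ == "__main__":
    # Hypothetical evaluation results for a commercial gender classifier.
    sample = [
        {"gender": "female", "skin_group": "darker", "correct": False},
        {"gender": "female", "skin_group": "darker", "correct": True},
        {"gender": "female", "skin_group": "lighter", "correct": True},
        {"gender": "male", "skin_group": "darker", "correct": True},
        {"gender": "male", "skin_group": "lighter", "correct": True},
    ]
    acc = intersectional_accuracy(sample)
    for (gender, skin), value in sorted(acc.items()):
        print(f"{gender:>6} / {skin:<7}: {value:.2f}")
    # The disparity of interest is the gap between the best- and
    # worst-performing subgroups, not the aggregate accuracy.
    print(f"max-min gap: {max(acc.values()) - min(acc.values()):.2f}")
```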
Syllabus
Introduction
Motivation
Background
Gender Classification
Benchmarks
Labeling
Benchmark Limitations
Overall Accuracy
Accuracy by Gender
Accuracy on Skin Type
Intersectional Evaluation for Gender Classification
Microsoft
Face++
IBM
Company Responses
Microsoft Response
IBM Response
Key takeaways
Intersectionality matters
The dangers of supremely white data
How is this technology used
Is this a good thing
How this technology is used
Quick question
Question
Question Fitzpatrick
Conclusion
Taught by
ACM FAccT Conference
Related Courses
Women and the Civil Rights Movement - University of Maryland, College Park via Coursera
Psychology of Political Activism: Women Changing the World - Smith College via edX
Diversity and Social Justice in Social Work - University of Michigan via edX
Literature, Culture and Media - Indian Institute of Technology Roorkee via Swayam
Wage Work for Women Citizens: 1870-1920 - Columbia University via edX