Picking on the Same Person - Does Algorithmic Monoculture Homogenize Outcomes?
Offered By: Stanford University via YouTube
Course Description
Overview
Explore the ethical implications of algorithmic monoculture in high-stakes decision-making processes. Delve into Kathleen Creel's lecture from Stanford University, which examines how using the same machine learning model across various settings can amplify biases and lead to consistent mistreatment of individuals. Learn about the formalization of outcome homogenization, experiments conducted on US census data, and the ethical arguments surrounding this phenomenon. Gain insights into the potential consequences for democracy, autonomy, and fairness, as well as the concept of fairness gerrymandering. Understand the risks associated with shared training data and foundation models, and consider the broader implications for algorithmic decision-making in society.
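The lecture's formalization of outcome homogenization can be sketched in a few lines: if several deployed models all fail on the same individuals more often than independent errors would predict, outcomes are homogenized. The simulation below is a hypothetical illustration under assumed names and numbers, not the metric or code from the lecture itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, k_models = 10_000, 3
fail_rate = 0.2  # assumed marginal failure rate per deployment

# Scenario A: each deployment makes independent errors.
independent = rng.random((n_people, k_models)) < fail_rate

# Scenario B: deployments share a model/data source, so the same
# people tend to fail everywhere (plus a little independent noise).
shared = rng.random(n_people) < fail_rate
noise = rng.random((n_people, k_models)) < 0.05
correlated = shared[:, None] ^ noise

def systemic_failure_rate(outcomes):
    """Fraction of people who fail under every deployment."""
    return outcomes.all(axis=1).mean()

def homogenization(outcomes):
    """Observed systemic-failure rate divided by the rate expected
    if each deployment's errors were statistically independent."""
    expected = np.prod(outcomes.mean(axis=0))
    return systemic_failure_rate(outcomes) / expected

print(homogenization(independent))  # close to 1: no homogenization
print(homogenization(correlated))   # well above 1: failures concentrate
```

A ratio near 1 means being rejected by one system tells you little about the others; a ratio far above 1 means the same individuals are being "picked on" across deployments, which is the harm the lecture examines.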
Syllabus
Introduction
Project Overview
Case Scenario
Algorithmic Monoculture
Same Data Sets
Data Sets
Foundation Models
Name Artifacts
Name Sentiment
Question
Key Findings
Systemic Failure
Formalizing the Metric
Looking at Census Records
Facial Recognition Data
Ethical Dimension
Is There a Tradeoff?
Is This Discrimination?
Federally Protected Categories
Homogenization and Bias
Fairness gerrymandering
Contractualism
Effect on Democracy
Effect on Autonomy
Threshold
Walzer
Conclusion
Questions
Discrimination
Application
Risks
Taught by
Stanford HAI
Tags
Data Analysis
Related Courses
Computing for Data Analysis (Johns Hopkins University via Coursera)
Scientific Computing (Johns Hopkins University via Coursera)
Introduction to Data Science (University of Washington via Coursera)
Web Intelligence and Big Data (University of Washington via Coursera)
Indian Institute of Technology Delhi via Coursera