YoVDO

Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases

Offered By: Association for Computing Machinery (ACM) via YouTube

Tags

ACM FAccT Conference Courses, Machine Learning Courses, Image Processing Courses, Research Methodology Courses, Bias Analysis Courses

Course Description

Overview

Explore a 20-minute conference talk from the FAccT 2021 virtual event that examines the human-like biases present in image representations learned through unsupervised pre-training. Presented by Ryan Steed and Aylin Caliskan, this research presentation covers the Implicit Association Test, the methodology employed, and key findings. Gain insight into image generation techniques and future directions in this field, and understand how unsupervised machine learning models can inadvertently absorb societal biases, mirroring human prejudices in visual representations.
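The methodology discussed in the talk builds on embedding association tests: the Word Embedding Association Test (WEAT) measures bias as a differential cosine-similarity association between two sets of target embeddings and two sets of attribute embeddings, and the paper adapts this idea to image embeddings (the iEAT). As a rough illustration only, a minimal sketch of the WEAT-style effect size, assuming embeddings are given as NumPy vectors (the function names here are illustrative, not from the paper's code):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(w, A, B):
    """Mean similarity of w to attribute set A minus attribute set B."""
    return (np.mean([cosine(w, a) for a in A])
            - np.mean([cosine(w, b) for b in B]))

def effect_size(X, Y, A, B):
    """WEAT-style effect size: standardized difference between the
    mean associations of target sets X and Y with attributes A vs. B."""
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc, ddof=1)

# Toy 2-D "embeddings": targets X lean toward attribute set A,
# targets Y lean toward attribute set B, so the effect size is positive.
A = [np.array([1.0, 0.0]), np.array([1.0, 0.1])]
B = [np.array([0.0, 1.0]), np.array([0.1, 1.0])]
X = [np.array([0.9, 0.05]), np.array([1.0, 0.0])]
Y = [np.array([0.05, 0.9]), np.array([0.0, 1.0])]

print(effect_size(X, Y, A, B))
```

In the paper's setting, the embeddings come from unsupervised image models rather than word vectors, with image stimuli standing in for the IAT's word stimuli; significance is then assessed with a permutation test over the target sets.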

Syllabus

Introduction
Implicit Association Test
Methods
Results
Key Observations
Image Generation
What's Next


Taught by

ACM FAccT Conference

Related Courses

Translation Tutorial - Thinking Through and Writing About Research Ethics Beyond "Broader Impact"
Association for Computing Machinery (ACM) via YouTube
Translation Tutorial - Data Externalities
Association for Computing Machinery (ACM) via YouTube
Translation Tutorial - Causal Fairness Analysis
Association for Computing Machinery (ACM) via YouTube
Implications Tutorial - Using Harms and Benefits to Ground Practical AI Fairness Assessments
Association for Computing Machinery (ACM) via YouTube
Responsible AI in Industry - Lessons Learned in Practice
Association for Computing Machinery (ACM) via YouTube