Quantifying and Reducing Gender Stereotypes in Word Embeddings

Offered By: Association for Computing Machinery (ACM) via YouTube

Tags

ACM FAccT Conference Courses, Programming Courses, Data Analysis Courses, Machine Learning Courses, Word Embeddings Courses, Algorithmic Fairness Courses

Course Description

Overview

Explore gender stereotypes in word embeddings and learn techniques to quantify and reduce bias in this hands-on tutorial from the FAT* 2018 conference. Start with the basics of how word embeddings are learned and applied, then gain practical experience writing programs that display and measure gender stereotypes in these widely used natural language processing tools. Discover methods to mitigate bias and support fairer algorithmic decision-making. Work through IPython notebooks with real-world examples and exercises that reinforce core concepts of fairness in machine learning and natural language processing.
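A common way to quantify gender stereotypes in embeddings, and the approach popularized around this tutorial's topic (Bolukbasi et al., 2016), is to project word vectors onto a "gender direction" such as he − she, and to debias by removing that component. The sketch below uses tiny hand-made toy vectors purely for illustration; a real exercise would load pretrained embeddings (e.g., word2vec or GloVe), and the specific words and values here are assumptions, not data from the tutorial.

```python
import numpy as np

# Toy 3-d "embeddings" for illustration only; real notebooks would load
# pretrained vectors (word2vec, GloVe, etc.).
vectors = {
    "he":       np.array([ 1.0, 0.2, 0.0]),
    "she":      np.array([-1.0, 0.2, 0.0]),
    "nurse":    np.array([-0.6, 0.8, 0.1]),
    "engineer": np.array([ 0.7, 0.7, 0.1]),
}

def gender_direction(vecs):
    """Bias direction: the normalized difference he - she."""
    d = vecs["he"] - vecs["she"]
    return d / np.linalg.norm(d)

def bias_score(word, vecs):
    """Cosine of the word vector with the gender direction.
    Positive -> leans toward 'he'; negative -> leans toward 'she'."""
    g = gender_direction(vecs)
    v = vecs[word]
    return float(np.dot(v, g) / np.linalg.norm(v))

def debias(word, vecs):
    """Hard debiasing by projection removal: subtract the component
    of the vector that lies along the gender direction."""
    g = gender_direction(vecs)
    v = vecs[word]
    return v - np.dot(v, g) * g
```

With these toy vectors, "nurse" scores negative (she-leaning) and "engineer" positive (he-leaning); after `debias`, a word's component along the gender direction is zero, which is the core idea behind the mitigation step.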

Syllabus

FAT* 2018 Hands-on Tutorial: Quantifying and Reducing Gender Stereotypes in Word Embeddings


Taught by

ACM FAccT Conference

Related Courses

A Bayesian Model of Cash Bail Decisions
Association for Computing Machinery (ACM) via YouTube
A Pilot Study in Surveying Clinical Judgments to Evaluate Radiology Report Generation
Association for Computing Machinery (ACM) via YouTube
A Review of Taxonomies of Explainable Artificial Intelligence - XAI Methods
Association for Computing Machinery (ACM) via YouTube
A Semiotics-Based Epistemic Tool to Reason About Ethical Issues in Digital Technology Design and Development
Association for Computing Machinery (ACM) via YouTube
A Statistical Test for Probabilistic Fairness
Association for Computing Machinery (ACM) via YouTube