Quantifying and Reducing Gender Stereotypes in Word Embeddings
Offered By: Association for Computing Machinery (ACM) via YouTube
Course Description
Overview
Explore gender stereotypes in word embeddings and learn techniques to quantify and reduce bias in this hands-on tutorial from the FAT* 2018 conference. Dive into the basics of word embedding learning and applications, then gain practical experience writing programs to display and measure gender stereotypes in these widely used natural language processing tools. Discover methods to mitigate bias and create fairer algorithmic decision-making processes. Work with IPython notebooks to explore real-world examples and complete exercises that reinforce concepts of fairness in machine learning and natural language processing.
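The kind of measurement the tutorial covers can be sketched in a few lines: score a word's gender association by projecting its vector onto a gender direction (commonly the normalized difference of "he" and "she" vectors), then reduce the bias by removing that component. This is a minimal illustration with made-up toy vectors, not the tutorial's own notebook code; the 4-dimensional embeddings below are hypothetical.

```python
import numpy as np

# Toy 4-dimensional embeddings (hypothetical values, for illustration only).
vecs = {
    "he":       np.array([ 1.0, 0.2, 0.3, 0.1]),
    "she":      np.array([-1.0, 0.2, 0.3, 0.1]),
    "engineer": np.array([ 0.4, 0.8, 0.1, 0.2]),
    "nurse":    np.array([-0.5, 0.7, 0.2, 0.1]),
}

def gender_direction(v):
    """One common choice of gender subspace: the normalized he - she difference."""
    d = v["he"] - v["she"]
    return d / np.linalg.norm(d)

def bias(word, v, g):
    """Projection of a word vector onto the gender direction;
    the sign shows which pole the word leans toward."""
    return float(np.dot(v[word], g))

def neutralize(word, v, g):
    """Remove the component along the gender direction
    (the 'neutralize' step of hard debiasing)."""
    w = v[word]
    return w - np.dot(w, g) * g

g = gender_direction(vecs)
print(f"engineer bias: {bias('engineer', vecs, g):+.2f}")
print(f"nurse bias:    {bias('nurse', vecs, g):+.2f}")
print(f"engineer bias after neutralize: {np.dot(neutralize('engineer', vecs, g), g):+.2f}")
```

With real embeddings (e.g., word2vec loaded via a library such as gensim) the same projection reveals the occupational stereotypes the tutorial examines, and the neutralize step drives the gendered component of a neutral word toward zero.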
Syllabus
FAT* 2018 Hands-on Tutorial: Quantifying and Reducing Gender Stereotypes in Word Embeddings
Taught by
ACM FAccT Conference
Related Courses
Towards an Ethical Digital Society: From Theory to Practice - NPTEL via Swayam
Introduction to the Theory of Computing - Stanford University via YouTube
Fairness in Medical Algorithms - Threats and Opportunities - Open Data Science via YouTube
Fairness in Representation Learning - Natalie Dullerud - Stanford University via YouTube
Privacy Governance and Explainability in ML - AI - Strange Loop Conference via YouTube