
Probing ML Models for Fairness with the What-if Tool and SHAP

Offered By: Association for Computing Machinery (ACM) via YouTube

Tags

ACM FAccT Conference Courses, SHAP Courses

Course Description

Overview

Explore fairness in machine learning models through a tutorial presented at ACM FAT* 2020 (now FAccT) in Barcelona. Dive into the What-If Tool and SHAP (SHapley Additive exPlanations) to gain practical insight into assessing and improving model fairness. Learn from Google's James Wexler and Andrew Zaldivar as they demonstrate interactive techniques for probing ML models, watch an in-depth demo of real-world applications, and consult the accompanying slides for further study. Build an understanding of ethical AI practices and the skills needed to create more equitable machine learning systems.
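For a feel of the SHAP side of the tutorial, the sketch below (not taken from the course materials) trains a classifier on SHAP's bundled Adult income dataset and plots per-feature Shapley attributions; the dataset, the XGBoost model choice, and the beeswarm plot are illustrative assumptions, not the presenters' own code.

```python
# Minimal sketch: using SHAP to inspect how features, including demographic
# attributes, influence a model's predictions on tabular data.
import shap
import xgboost
from sklearn.model_selection import train_test_split

# Load the Adult income dataset bundled with SHAP (assumed here for illustration).
X, y = shap.datasets.adult()
X_train, X_test, y_train, y_test = train_test_split(X, y.astype(int), random_state=0)

# Train any tree-based model; XGBoost is used as a stand-in.
model = xgboost.XGBClassifier().fit(X_train, y_train)

# Compute Shapley values for the held-out predictions.
explainer = shap.Explainer(model, X_train)
shap_values = explainer(X_test)

# Summarize per-feature contributions; features such as "Sex" or "Race" with
# large attributions are candidates for a closer fairness review.
shap.plots.beeswarm(shap_values)
```

A similar inspection can be done interactively in the What-If Tool, which lets you slice predictions by feature values and compare fairness metrics across subgroups without writing plotting code.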

Syllabus

Probing ML models for fairness with the What-if Tool and SHAP


Taught by

ACM FAccT Conference

Related Courses

Machine Learning Interpretable: SHAP, PDP y permutación (Interpretable Machine Learning: SHAP, PDP and Permutation)
Coursera Project Network via Coursera
Machine Learning and AI Foundations: Producing Explainable AI (XAI) and Interpretable Machine Learning Solutions
LinkedIn Learning
Guided Project: Predict World Cup Soccer Results with ML
IBM via edX
What is Interpretable Machine Learning - ML Explainability - with Python LIME Shap Tutorial
1littlecoder via YouTube
How Can I Explain This to You? An Empirical Study of Deep Neural Net Explanation Methods - Spring 2021
University of Central Florida via YouTube