YoVDO

Mitigating Bias in Models with SHAP and Fairlearn

Offered By: Linux Foundation via YouTube

Tags

Machine Learning Courses Fairness in AI Courses Ethical AI Courses Model Interpretability Courses SHAP Courses

Course Description

Overview

Explore techniques for addressing bias in machine learning models in this conference talk by Sean Owen of Databricks. The talk applies SHAP (SHapley Additive exPlanations) and Fairlearn, two open-source tools for identifying and mitigating bias in AI systems, and shows how they can improve model interpretability, promote fairness, and strengthen overall model quality. Gain insight into ethical AI practices and practical strategies for building more equitable and transparent machine learning solutions.
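The sketch below is not taken from the talk; it is a minimal illustration of how SHAP and Fairlearn are commonly combined: SHAP to inspect which features drive a model's predictions, Fairlearn to measure disparity across groups and retrain under a fairness constraint. The dataset ("loans.csv"), column names ("approved", "gender"), and model choice are illustrative assumptions.

import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from fairlearn.metrics import MetricFrame, selection_rate
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

# Hypothetical tabular data with a binary label and a sensitive attribute.
data = pd.read_csv("loans.csv")               # assumed file
X = data.drop(columns=["approved", "gender"])
y = data["approved"]
sensitive = data["gender"]

# 1. Train a baseline model and inspect it with SHAP.
model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)        # per-feature contributions
shap.summary_plot(shap_values, X)             # flags features that may proxy for the sensitive attribute

# 2. Quantify disparity across groups with Fairlearn.
baseline = MetricFrame(metrics=selection_rate,
                       y_true=y,
                       y_pred=model.predict(X),
                       sensitive_features=sensitive)
print(baseline.by_group)                      # selection rate per group

# 3. Mitigate by retraining under a demographic-parity constraint.
mitigator = ExponentiatedGradient(GradientBoostingClassifier(),
                                  constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=sensitive)
mitigated = MetricFrame(metrics=selection_rate,
                        y_true=y,
                        y_pred=mitigator.predict(X),
                        sensitive_features=sensitive)
print(mitigated.by_group)                     # disparity should shrink after mitigation

Comparing the two MetricFrame outputs before and after mitigation is the basic workflow: the SHAP summary points to where bias may enter, and the Fairlearn reduction trades a small amount of accuracy for a smaller gap in selection rates.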

Syllabus

Mitigating Bias in Models with SHAP and Fairlearn - Sean Owen, Databricks


Taught by

Linux Foundation

Related Courses

Machine Learning Interpretable: SHAP, PDP y permutacion
Coursera Project Network via Coursera
Machine Learning and AI Foundations: Producing Explainable AI (XAI) and Interpretable Machine Learning Solutions
LinkedIn Learning
Guided Project: Predict World Cup Soccer Results with ML
IBM via edX
What is Interpretable Machine Learning - ML Explainability - with Python LIME Shap Tutorial
1littlecoder via YouTube
How Can I Explain This to You? An Empirical Study of Deep Neural Net Explanation Methods - Spring 2021
University of Central Florida via YouTube