Explainable ML in the Wild - When Not to Trust Your Explanations

Offered By: Association for Computing Machinery (ACM) via YouTube

Tags

ACM FAccT Conference Courses
Data Science Courses
AI Ethics Courses

Course Description

Overview

Dive into a comprehensive tutorial on the limitations and potential pitfalls of explainable machine learning. Explore real-world scenarios where explanations can be unreliable, presented by Shalmali Joshi, Chirag Agarwal, and Himabindu Lakkaraju of Harvard University. Learn to critically evaluate machine learning explanations and recognize when to exercise caution in trusting them. Gain insight into the challenges of applying explainable ML in practice, along with strategies for building more robust and trustworthy AI systems. This 85-minute session, part of the FAccT 2021 conference, equips data scientists, researchers, and AI practitioners with essential knowledge for the responsible and ethical deployment of explainable machine learning techniques.

Syllabus

Tutorial: Explainable ML in the Wild: When Not to Trust Your Explanations
Taught by

ACM FAccT Conference

Related Courses

Translation Tutorial - Thinking Through and Writing About Research Ethics Beyond "Broader Impact"
Association for Computing Machinery (ACM) via YouTube
Translation Tutorial - Data Externalities
Association for Computing Machinery (ACM) via YouTube
Translation Tutorial - Causal Fairness Analysis
Association for Computing Machinery (ACM) via YouTube
Implications Tutorial - Using Harms and Benefits to Ground Practical AI Fairness Assessments
Association for Computing Machinery (ACM) via YouTube
Responsible AI in Industry - Lessons Learned in Practice
Association for Computing Machinery (ACM) via YouTube