YoVDO

Stop Explaining Black Box ML Models for High Stakes Decisions and Use Interpretable Models

Offered By: Toronto Machine Learning Series (TMLS) via YouTube

Tags

Explainable AI Courses
Machine Learning Courses
Criminal Justice Courses
Ethics in AI Courses
Algorithmic Fairness Courses

Course Description

Overview

Explore the critical implications of using black box machine learning models for high-stakes decisions in this thought-provoking 49-minute conference talk from the Toronto Machine Learning Series. Delve into the insights of Cynthia Rudin, Professor of Computer Science, Electrical and Computer Engineering, and Statistical Science at Duke University, as she challenges the widespread use of opaque ML models. Examine the serious societal consequences, including flawed bail and parole decisions in criminal justice, that arise from relying on these models. Discover why explanations for black box models can be unreliable and potentially misleading. Learn about the advantages of interpretable machine learning models, which provide inherent explanations faithful to their actual computations. Gain valuable perspectives on the importance of transparency and accountability in AI-driven decision-making processes for high-stakes scenarios.
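To make the description's central contrast concrete, here is a minimal sketch (not from the talk itself) of an inherently interpretable model: a shallow decision tree whose printed rules *are* its computation, so the explanation is faithful by construction rather than a post-hoc approximation of a black box. The dataset and depth limit are illustrative choices, not anything Rudin prescribes.

```python
# Minimal sketch of an inherently interpretable model.
# A depth-2 decision tree can be printed as a handful of if/else rules
# that a human can audit directly; the rules exactly describe how the
# model computes its prediction. Dataset choice is illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# Constrain depth so the whole model fits on a few lines of output.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the fitted tree as human-readable decision rules.
rules = export_text(tree, feature_names=list(data.feature_names))
print(rules)
```

Contrast this with a post-hoc explainer applied to a deep ensemble: the explainer produces a *second* model that approximates the first, and the talk's argument is that this approximation can be unreliable exactly when the stakes are highest.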

Syllabus

Stop Explaining Black Box ML Models for High Stakes Decisions and Use Interpretable Models


Taught by

Toronto Machine Learning Series (TMLS)

Related Courses

Forensic Science and Criminal Justice
University of Leicester via FutureLearn
Crime, Justice and Society
The University of Sheffield via FutureLearn
From Crime to Punishment: an Introduction to Criminal Justice
University of York via FutureLearn
Criminalistics
Doctor Harisingh Gour Vishwavidyalaya, Sagar via Swayam
Preventing Gun Violence in America Teach-Out
University of Michigan via Coursera