Optimizing for Interpretability in Deep Neural Networks - Mike Wu

Offered By: Stanford University via YouTube

Tags

Neural Networks Courses, Artificial Intelligence Courses, Health Care Courses, Deep Learning Courses, Distillation Courses, Deep Neural Networks Courses, Interpretability Courses

Course Description

Overview

Explore a novel approach to deep neural network interpretability in this 51-minute Stanford University lecture. Delve into the concept of regularizing deep models for better human understanding, focusing on medical prediction tasks in critical care and HIV treatment. Learn about the challenges of interpretability, various approaches to questioning models, and the idea of human simulation. Examine tree regularization techniques, including regional tree regularization, and their application to real-world datasets like MIMIC III. Discover how to evaluate interpretability metrics and understand the caveats of regularizing for interpretability. Gain insights into the speaker's research on deep generative models and unsupervised learning algorithms, with applications in education and healthcare.
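To make the "tree regularization" and "Average Path Length" topics from the description more concrete, below is a minimal Python sketch of the core idea: distill a decision tree that mimics a trained model's predictions and score the model by the tree's average decision-path length (shorter paths suggest a more human-simulatable model). The helper names (`average_path_length`, `tree_regularization_penalty`) and the use of scikit-learn are illustrative assumptions, not the lecture's code; the actual method presented additionally trains a differentiable surrogate so this penalty can be optimized during training.

```python
# Illustrative sketch only (hypothetical helpers); the lecture's method also
# uses a differentiable surrogate so the penalty can be backpropagated.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

def average_path_length(tree, X):
    """Mean number of decision nodes traversed per example -- a proxy for
    the 'Average Path Length' interpretability metric in the syllabus."""
    # decision_path returns an (n_samples x n_nodes) indicator matrix;
    # each row sum counts the nodes on that example's root-to-leaf path.
    node_indicator = tree.decision_path(X)
    return node_indicator.sum(axis=1).mean()

def tree_regularization_penalty(model, X, max_leaf_nodes=16):
    """Distill a decision tree that mimics the model's own predictions,
    then measure how short the distilled tree's paths are."""
    y_hat = model.predict(X)                      # labels from the model itself
    surrogate = DecisionTreeClassifier(max_leaf_nodes=max_leaf_nodes)
    surrogate.fit(X, y_hat)                       # distilled decision tree
    return average_path_length(surrogate, X)

# Toy usage: compare two networks of different capacity by the penalty.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)

simple = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500).fit(X, y)
deep = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y)

print("simple net penalty:", tree_regularization_penalty(simple, X))
print("deep net penalty:  ", tree_regularization_penalty(deep, X))
```

A lower penalty means the model's decision surface can be imitated by a shallow tree, which is the sense of "simulatable" interpretability discussed in the lecture.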

Syllabus

Intro
The challenge of interpretability
Lots of different definitions and ideas
Asking the model questions
A conversation with the model
A case for human simulation
Simulatable?
Post-Hoc Analysis
Interpretability as a regularizer
Average Path Length
Problem Setup
Tree Regularization (Overview)
Toy Example for Intuition
Humans are context dependent
Regional Tree Regularization
Example: Three Kinds of Interpretability
MIMIC III Dataset
Evaluation Metrics
Results on MIMIC III
A second application: treatment for HIV
Distilled Decision Tree
Caveats and Gotchas
Regularizing for Interpretability


Taught by

Stanford MedAI

Related Courses

Machine Learning Modeling Pipelines in Production
DeepLearning.AI via Coursera
Live Responsible AI Dashboard: One-Stop Shop for Operationalizing RAI in Practice - Episode 43
Microsoft via YouTube
Build Responsible AI Using Error Analysis Toolkit
Microsoft via YouTube
Neural Networks Are Decision Trees - With Alexander Mattick
Yannic Kilcher via YouTube
Interpretable Explanations of Black Boxes by Meaningful Perturbation - CAP6412 Spring 2021
University of Central Florida via YouTube