Optimizing for Interpretability in Deep Neural Networks - Mike Wu
Offered By: Stanford University via YouTube
Course Description
Overview
Explore a novel approach to deep neural network interpretability in this 51-minute Stanford University lecture. Delve into the concept of regularizing deep models for better human understanding, focusing on medical prediction tasks in critical care and HIV treatment. Learn about the challenges of interpretability, various approaches to questioning models, and the idea of human simulation. Examine tree regularization techniques, including regional tree regularization, and their application to real-world datasets like MIMIC III. Discover how to evaluate interpretability metrics and understand the caveats of regularizing for interpretability. Gain insights into the speaker's research on deep generative models and unsupervised learning algorithms, with applications in education and healthcare.
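To give a flavor of the lecture's central idea, below is a minimal sketch of tree regularization, assuming scikit-learn and NumPy: distill the network's predictions into a small decision tree and penalize the tree's average decision-path length (APL), a proxy for how easily a human can simulate the model step by step. In the published method this penalty is made differentiable via a learned surrogate; the sketch computes only the raw, non-differentiable metric, and all names (average_path_length, lam, probs) are illustrative rather than taken from the talk.

```python
# Minimal sketch of the tree-regularization idea (assumes scikit-learn, NumPy).
# Distill the network's hard predictions into a small mimic tree and use the
# tree's average decision-path length (APL) as a human-simulability penalty.
# Only the raw metric is computed here; the published method learns a
# differentiable surrogate for it. Names are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def average_path_length(X, y_hat, max_depth=8):
    """Fit a mimic tree to the network's predictions and return the mean
    number of nodes a sample traverses from root to leaf."""
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(X, y_hat)
    # decision_path yields a sparse (n_samples, n_nodes) indicator matrix
    return float(tree.decision_path(X).sum(axis=1).mean())

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
probs = 1.0 / (1.0 + np.exp(-X[:, 0]))   # stand-in for network outputs
y_hat = (probs > 0.5).astype(int)        # binarized predictions
task_loss = 0.42                         # placeholder cross-entropy value
lam = 0.01                               # regularization strength
total_loss = task_loss + lam * average_path_length(X, y_hat)
print(f"regularized loss: {total_loss:.3f}")
```

A low APL means a short decision tree can reproduce the network's behavior, which is the sense of "simulatable" the lecture develops.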
Syllabus
Intro
The challenge of interpretability
Lots of different definitions and ideas
Asking the model questions
A conversation with the model
A case for human simulation
Simulatable?
Post-Hoc Analysis
Interpretability as a regularizer
Average Path Length
Problem Setup
Tree Regularization (Overview)
Toy Example for Intuition
Humans are context dependent
Regional Tree Regularization
Example: Three Kinds of Interpretability
MIMIC III Dataset
Evaluation Metrics
Results on MIMIC III
A second application: treatment for HIV
Distilled Decision Tree
Caveats and Gotchas
Regularizing for Interpretability
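To make the "Distilled Decision Tree" and "Evaluation Metrics" topics above concrete, here is a hedged sketch of post-hoc distillation: fit a compact tree to a trained network's predictions, report fidelity (agreement between tree and network) and average path length, and print the tree's human-readable rules. The stand-in network and all names are illustrative assumptions, not the lecture's exact procedure.

```python
# Hedged sketch of post-hoc distillation and its evaluation metrics:
# fidelity (agreement with the network) and average path length.
# The stand-in network and all names are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def net_predict(X):
    # Stand-in for a trained network's hard (0/1) predictions.
    return (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))
y_hat = net_predict(X)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y_hat)
fidelity = (tree.predict(X) == y_hat).mean()           # agreement with the network
apl = float(tree.decision_path(X).sum(axis=1).mean())  # simulability proxy
print(f"fidelity={fidelity:.3f}, average path length={apl:.2f}")
print(export_text(tree, feature_names=[f"x{i}" for i in range(4)]))
```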
Taught by
Stanford MedAI
Related Courses
Downstream Processing - Indian Institute of Technology Madras via Swayam
Chemical Process Intensification - Indian Institute of Technology Guwahati via Swayam
Thermal Operations in Food Process Engineering: Theory and Applications - Indian Institute of Technology, Kharagpur via Swayam
Cannabis Processing - Doane University via edX
Principles and Practices of Process Equipment and Plant Design - Indian Institute of Technology, Kharagpur via Swayam