Interpreting Deep Neural Networks Towards Trustworthiness
Offered By: Institut des Hautes Etudes Scientifiques (IHES) via YouTube
Course Description
Overview
Explore the intricacies of interpreting deep neural networks for enhanced trustworthiness in this 33-minute lecture by Bin Yu of the University of California, Berkeley, presented at the Institut des Hautes Etudes Scientifiques (IHES). Delve into the contextual decomposition (CD) method, which attributes importance to features and feature interactions for individual predictions. Discover how applying CD to interpret deep learning models in cosmology led to the development of the adaptive wavelet distillation (AWD) interpretation method. Learn how AWD models outperform the deep neural networks they distill while remaining interpretable, in applications to both cosmology and cell biology. Gain insights into the importance of quality control throughout the entire data science life cycle for building models that support trustworthy interpretation.
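As background for the lecture, the core idea of contextual decomposition (splitting each layer's activations into a "relevant" part driven by a chosen feature group and an "irrelevant" remainder, and propagating both through the network) can be sketched as below. This is an illustrative NumPy sketch under simplifying assumptions, not Bin Yu's implementation: the bias convention for the linear layer and the ReLU splitting rule shown here are one simple choice among those discussed in the CD literature.

```python
import numpy as np


def relu(x):
    return np.maximum(x, 0)


def cd_linear(beta, gamma, W, b):
    # Propagate the relevant (beta) and irrelevant (gamma) parts
    # through a linear layer. Assigning the bias entirely to the
    # irrelevant part is a simplifying convention for this sketch.
    return W @ beta, W @ gamma + b


def cd_relu(beta, gamma):
    # Split the ReLU output: the relevant part is the change in
    # activation attributable to beta, given gamma as context.
    irrelevant = relu(gamma)
    relevant = relu(beta + gamma) - irrelevant
    return relevant, irrelevant


# Usage: attribute a one-layer network's output to features 0 and 1.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))
b = rng.standard_normal(3)
x = rng.standard_normal(4)

mask = np.array([1.0, 1.0, 0.0, 0.0])          # feature group of interest
beta, gamma = cd_linear(x * mask, x * (1 - mask), W, b)
beta, gamma = cd_relu(beta, gamma)

# The two parts always sum to the ordinary forward pass, so beta is an
# additive attribution of the output to the chosen feature group.
assert np.allclose(beta + gamma, relu(W @ x + b))
```

The invariant checked by the final assertion (relevant + irrelevant equals the full forward pass) is what makes the decomposition an attribution rather than just a sensitivity measure; for deeper networks, the two functions are applied layer by layer.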
Syllabus
Bin Yu - Interpreting Deep Neural Networks towards Trustworthiness
Taught by
Institut des Hautes Etudes Scientifiques (IHES)
Related Courses
Machine Learning Modeling Pipelines in Production (DeepLearning.AI via Coursera)
Live Responsible AI Dashboard: One-Stop Shop for Operationalizing RAI in Practice - Episode 43 (Microsoft via YouTube)
Build Responsible AI Using Error Analysis Toolkit (Microsoft via YouTube)
Neural Networks Are Decision Trees - With Alexander Mattick (Yannic Kilcher via YouTube)
Interpretable Explanations of Black Boxes by Meaningful Perturbation - CAP6412 Spring 2021 (University of Central Florida via YouTube)