
Interpreting Deep Neural Networks Towards Trustworthiness

Offered By: Institut des Hautes Etudes Scientifiques (IHES) via YouTube

Tags

Deep Neural Networks Courses, Data Science Courses, Machine Learning Courses, Cosmology Courses, Cell Biology Courses, Interpretability Courses

Course Description

Overview

Explore the intricacies of interpreting deep neural networks for enhanced trustworthiness in this 33-minute lecture by Bin Yu from the University of California, Berkeley, presented at the Institut des Hautes Etudes Scientifiques (IHES). Delve into the contextual decomposition (CD) method, which attributes importance to features and feature interactions for individual predictions. Discover how applying CD to interpret deep learning models in cosmology led to the development of the adaptive wavelet distillation (AWD) interpretation method. Learn how AWD outperforms deep neural networks while remaining interpretable in both cosmology and cell biology applications. Gain insights into the importance of quality control throughout the entire data science life cycle to build models that support trustworthy interpretation.
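
For readers new to feature attribution, the snippet below is a minimal, self-contained sketch of the additive intuition behind CD, shown for a plain linear model where the decomposition is exact. All weights, inputs, and the chosen feature group are stand-in values for illustration; the actual CD method discussed in the lecture propagates this kind of split through every layer of a deep network, including its nonlinearities.

```python
import numpy as np

# Toy setup: a single linear layer (logistic-regression-style logit).
# For a purely linear model, the prediction splits exactly into a
# contribution from the feature group of interest (beta) and a
# contribution from everything else plus the bias (gamma).
rng = np.random.default_rng(0)
w = rng.normal(size=5)        # stand-in learned weights
b = 0.1                       # stand-in bias
x = rng.normal(size=5)        # one input example

group = [0, 2]                # hypothetical features whose importance we want
mask = np.zeros_like(x, dtype=bool)
mask[group] = True

beta = w[mask] @ x[mask]                 # contribution of the chosen feature group
gamma = w[~mask] @ x[~mask] + b          # contribution of the remaining features and bias

logit = w @ x + b
assert np.isclose(beta + gamma, logit)   # decomposition is exact in the linear case

print(f"group contribution: {beta:+.3f}, rest: {gamma:+.3f}, total logit: {logit:+.3f}")
```

In a deep network, CD carries such a (beta, gamma) pair forward layer by layer, so the final prediction can be attributed to the chosen feature group and its interactions rather than to individual features alone.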

Syllabus

Bin Yu - Interpreting Deep Neural Networks towards Trustworthiness


Taught by

Institut des Hautes Etudes Scientifiques (IHES)

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Natural Language Processing
Columbia University via Coursera
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent