Understanding Deep Neural Networks - CVPR 2020 iMLCV Tutorial

Offered By: Bolei Zhou via YouTube

Tags

Deep Learning, Computer Vision, Neural Networks, Self-supervised Learning, Interpretability

Course Description

Overview

Explore techniques for understanding deep neural networks in this video from the CVPR 2020 tutorial on Interpretable Machine Learning for Computer Vision (iMLCV). Begin with applications of deep learning and the main research themes, then survey perturbation approaches to interpretability. Learn how meaningful and extremal perturbations identify the evidence a network relies on, examining why foreground evidence is usually sufficient and why suppressing the background may overdrive the network. See how regularization, area constraints, and smooth masks mitigate adversarial artifacts in learned masks, and how the approach compares with prior work on weak localization, selectivity to the output class, and sensitivity to model parameters. Investigate intermediate activations through spatial and channel attribution and activation "diffing", and quantify interpretability by counting concepts per filter and filters per concept, including in self-supervised models. Compare concept embedding spaces on segmentation and classification tasks, and close with human-guided machine learning and future directions in model debugging.
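The "Meaningful Perturbations" portion of the syllabus follows the idea of optimizing a mask that deletes the evidence for a class. As a rough illustration only, a minimal PyTorch sketch of that optimization might look like the following; the ResNet-18 backbone, blur baseline, step count, and regularizer weights are assumptions for the sketch, not the tutorial's actual settings.

```python
# Minimal sketch of mask optimization in the spirit of "meaningful
# perturbations": learn a mask that, by blurring parts of the image,
# drives down the score of the predicted class while staying small
# and smooth. Backbone and hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
image = torch.rand(1, 3, 224, 224)                      # stand-in for a real image
blurred = F.avg_pool2d(image, 11, stride=1, padding=5)  # crude blur baseline

target = model(image).argmax(dim=1).item()              # class to explain
mask_logits = torch.zeros(1, 1, 224, 224, requires_grad=True)
opt = torch.optim.Adam([mask_logits], lr=0.1)

for _ in range(100):
    m = torch.sigmoid(mask_logits)            # mask in [0, 1]; 1 = keep pixel
    perturbed = m * image + (1 - m) * blurred
    score = F.softmax(model(perturbed), dim=1)[0, target]
    area = (1 - m).mean()                     # penalize deleting a large region
    tv = (m[..., 1:, :] - m[..., :-1, :]).abs().mean() \
       + (m[..., :, 1:] - m[..., :, :-1]).abs().mean()  # encourage smoothness
    loss = score + 0.05 * area + 0.2 * tv
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The optimized mask highlights the regions whose removal most reduces the class score; the later syllabus items on area constraints and smooth masks refine exactly these two regularizers.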

Syllabus

Intro
Applications of Deep Learning
Research Themes
Prior Work: Perturbation Approaches
Our Approach: Meaningful Perturbations
Our Approach: Extremal Perturbations
Interpretability
Foreground evidence is usually sufficient
Suppressing the background may overdrive the network
Adversarial Defense
Regularization to mitigate artifacts
Area Constraint
Smooth Masks
Comparison with Prior Work
Measure Performance on Weak Localization
Selectivity to Output Class
Sensitive to Model Parameters
Intermediate Activations
Spatial Attribution
Channel Attribution
Activation "Diffing"
# Concepts per Filter
# Filters per Concept
Self-Supervised Learning
Comparing Concept Embedding Spaces
Segmentation
Classification
Human-Guided Machine Learning
Future Work: Model Debugging
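The syllabus items "# Concepts per Filter" and "# Filters per Concept" refer to counting how many human-interpretable concepts each unit detects, in the spirit of Network Dissection: a filter is said to detect a concept when the IoU between its thresholded activation map and the concept's segmentation mask exceeds a cutoff. A toy sketch of that counting follows; the random data and the 0.04 IoU cutoff are assumptions, purely for illustration.

```python
# Toy illustration of counting concepts per filter and filters per
# concept from binarized activation maps and concept masks.
import numpy as np

rng = np.random.default_rng(0)
num_images, H, W = 50, 7, 7
num_filters, num_concepts = 8, 5
iou_cutoff = 0.04  # assumed threshold, not the tutorial's setting

# Binary activation maps per filter and binary segmentation masks per concept.
acts = rng.random((num_filters, num_images, H, W)) > 0.8
concepts = rng.random((num_concepts, num_images, H, W)) > 0.8

def iou(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

scores = np.array([[iou(acts[f], concepts[c])
                    for c in range(num_concepts)]
                   for f in range(num_filters)])
detected = scores > iou_cutoff
print("concepts per filter:", detected.sum(axis=1))
print("filters per concept:", detected.sum(axis=0))
```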


Taught by

Bolei Zhou

Related Courses

Machine Learning Modeling Pipelines in Production
DeepLearning.AI via Coursera
Live Responsible AI Dashboard: One-Stop Shop for Operationalizing RAI in Practice - Episode 43
Microsoft via YouTube
Build Responsible AI Using Error Analysis Toolkit
Microsoft via YouTube
Neural Networks Are Decision Trees - With Alexander Mattick
Yannic Kilcher via YouTube
Interpretable Explanations of Black Boxes by Meaningful Perturbation - CAP6412 Spring 2021
University of Central Florida via YouTube