Interpretable Representation Learning for Visual Intelligence
Offered By: Bolei Zhou via YouTube
Course Description
Overview
Explore a thesis defense presentation on interpretable representation learning for visual intelligence. Delve into deep neural networks for object classification, network visualization techniques, and interpretable representations for objects and scenes. Learn how class activation mapping explains the predictions of deep neural networks and supports weakly-supervised localization, and how temporal relational networks are used for event recognition. Gain insights into interpreting medical models and review the thesis's contributions to the field of visual intelligence.
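Class activation mapping (CAM) is the central explanation technique covered here: the feature maps of the network's last convolutional layer are weighted by the linear classifier weights of the predicted class and summed into a spatial heatmap. The sketch below illustrates that idea in PyTorch under assumed names (a torchvision ResNet-18 and a class_activation_map helper); it is a minimal illustration of the general technique, not code from the presentation itself.

# Minimal CAM sketch, assuming a torchvision ResNet-18 whose final conv
# features are globally average-pooled before a linear classifier.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

features = {}

def hook(module, inputs, output):
    # Capture the last convolutional feature maps: shape (1, C, H, W).
    features["maps"] = output.detach()

model.layer4.register_forward_hook(hook)

def class_activation_map(image_tensor):
    # image_tensor: preprocessed input of shape (1, 3, 224, 224).
    logits = model(image_tensor)
    class_idx = logits.argmax(dim=1).item()

    fmap = features["maps"][0]            # (C, H, W) conv feature maps
    weights = model.fc.weight[class_idx]  # (C,) classifier weights for the class

    # CAM: weighted sum of the feature maps using the class's linear weights.
    cam = torch.einsum("c,chw->hw", weights, fmap)
    cam = F.relu(cam)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam, class_idx

Upsampling the normalized heatmap and overlaying it on the input image highlights the regions that drove the prediction, which is also how CAM yields weakly-supervised object localization.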
Syllabus
Intro
Deep Neural Networks for Object Classification
Interpretability of Deep Neural Networks
Thesis Outline
Object Classification vs. Scene Recognition
Visualizing Units
Related Work on Network Visualization
Annotating the Interpretation of Units
Interpretable Representations for Objects and Scenes
Evaluate Unit for Semantic Segmentation
ImageNet Pretrained Network
Class Activation Mapping: Explain Prediction of Deep Neural Network
Evaluation on Weakly-Supervised Localization
Explaining the Failure Cases in Video
Interpreting Medical Models
Summary of Contributions
Temporal Relational Networks for Event Recognition
Acknowledgement
Taught by
Bolei Zhou
Related Courses
Computer Vision For iOS Developers Course (Udemy)
Image Processing With Python (YouTube)
Semantic Segmentation Explained (Traditional Chinese) (Amazon Web Services via AWS Skill Builder)
PyTorch Image Segmentation Tutorial with U-NET - Everything From Scratch (Aladdin Persson via YouTube)
Convolutional Neural Networks (Alexander Amini via YouTube)
Alexander Amini via YouTube