Interpretable Representation Learning for Visual Intelligence
Offered By: Bolei Zhou via YouTube
Course Description
Overview
Explore a thesis defense presentation on interpretable representation learning for visual intelligence. Delve into deep neural networks for object classification, network visualization techniques, and interpretable representations for objects and scenes. Learn how class activation mapping explains the predictions of deep neural networks and enables weakly-supervised localization, and how temporal relational networks recognize events in video. Gain insights into the interpretability of medical models and review the thesis's contributions to the field of visual intelligence.
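Class activation mapping, one of the core techniques covered in the talk, computes a class-specific heatmap as a weighted sum of the final convolutional feature maps, using the linear classifier's weights for that class: M_c(x, y) = Σ_k w_k^c f_k(x, y). Below is a minimal sketch of that idea in PyTorch; the ResNet-18 backbone, layer names, and the `class_activation_map` helper are illustrative assumptions, not the exact models or code from the thesis.

```python
# Minimal class activation mapping (CAM) sketch, assuming a
# torchvision ResNet-18, whose global-average-pooled features
# feed a single linear classifier (model.fc). Names here are
# illustrative, not taken from the thesis.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

features = {}

def save_maps(module, inputs, output):
    # Capture the final convolutional feature maps f_k, shape (N, K, H, W).
    features["maps"] = output.detach()

model.layer4.register_forward_hook(save_maps)

def class_activation_map(image, class_idx=None):
    # CAM for class c: M_c(x, y) = sum_k w_k^c * f_k(x, y)
    with torch.no_grad():
        logits = model(image)                 # image: (1, 3, 224, 224)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    fmap = features["maps"][0]                # (K, H, W)
    weights = model.fc.weight[class_idx]      # (K,), classifier weights w^c
    cam = torch.einsum("k,khw->hw", weights, fmap)
    cam = F.relu(cam)                         # keep positive evidence only
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam, class_idx
```

Upsampling the resulting heatmap to the input resolution and thresholding it gives the kind of weakly-supervised localization evaluated in the talk.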
Syllabus
Intro
Deep Neural Networks for Object Classification
Interpretability of Deep Neural Networks
Thesis Outline
Object Classification vs. Scene Recognition
Visualizing Units
Related Work on Network Visualization
Annotating the Interpretation of Units
Interpretable Representations for Objects and Scenes
Evaluate Unit for Semantic Segmentation
ImageNet-Pretrained Network
Class Activation Mapping: Explaining Deep Neural Network Predictions
Evaluation on Weakly-Supervised Localization
Explaining the Failure Cases in Video
Interpreting Medical Models
Summary of Contributions
Temporal Relational Networks for Event Recognition
Acknowledgement
Taught by
Bolei Zhou
Related Courses
Neural Networks for Machine Learning - University of Toronto via Coursera
機器學習技法 (Machine Learning Techniques) - National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning - University of Washington via Coursera
Прикладные задачи анализа данных (Applied Problems in Data Analysis) - Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning - Microsoft via edX