Explainable AI: Interpreting, Explaining and Visualizing Deep Learning
Offered By: University of Central Florida via YouTube
Course Description
Overview
Explore the principles of Explainable AI in this 32-minute lecture from the University of Central Florida's CAP6412 course. Delve into the challenges of interpreting deep learning models, examining alternative explanation techniques beyond Taylor Decomposition. Learn about the four key properties of effective explanation methods and understand the Layer-wise Relevance Propagation (LRP) rules for deep rectifier networks. Discover how to implement LRP efficiently and its connection to Deep Taylor Decomposition. Analyze the properties of explanations and explore rule choices using the VGG-16 network. Gain valuable insights into the importance of explainability in AI and its implications for various applications.
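The LRP rules for deep rectifier networks covered in the lecture can be illustrated with a minimal sketch of the basic LRP-0 rule for one dense ReLU layer: each input's relevance is its contribution a_j * w_jk to the pre-activation z_k, scaled by the output relevance R_k. This is my own NumPy illustration, not the lecture's code; the function name and the small `eps` stabilizer are assumptions.

```python
import numpy as np

def lrp_0_dense(a, W, b, R_out, eps=1e-9):
    """Redistribute relevance R_out from a dense layer's outputs to its
    inputs with the LRP-0 rule: R_j = a_j * sum_k w_jk * R_k / z_k."""
    z = a @ W + b                               # forward pre-activations z_k
    stab = z + eps * np.where(z >= 0, 1.0, -1.0)  # avoid division by zero
    s = R_out / stab                            # per-output ratio R_k / z_k
    c = W @ s                                   # backward pass: c_j = sum_k w_jk s_k
    return a * c                                # element-wise: R_j = a_j * c_j

# Tiny example: 3 inputs, 2 outputs, zero bias (so relevance is conserved)
rng = np.random.default_rng(0)
a = rng.random(3)
W = rng.standard_normal((3, 2))
R_out = np.array([1.0, 0.5])
R_in = lrp_0_dense(a, W, np.zeros(2), R_out)
```

With zero bias, the total relevance is conserved (sum of `R_in` equals sum of `R_out` up to the stabilizer), which is one of the properties of good explanation techniques the lecture discusses. The forward/ratio/backward structure is also how LRP is implemented efficiently in practice, since the backward step reuses the network's own gradient machinery.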
Syllabus
Introduction
Explainable Machine Learning
Problems with Taylor Decomposition
Alternative Explanation Techniques
Four Properties of Good Explanation Techniques
LRP Rules for Deep Rectifier Networks
LRP Rules: LRP-0
Implementing LRP Efficiently
LRP as a Deep Taylor Decomposition (ii)
Properties of Explanations (ii)
Rule Choices with VGG-16
Conclusion
Against
Questions?
Taught by
UCF CRCV
Related Courses
Explainable AI: Scene Classification and GradCam Visualization (Coursera Project Network via Coursera)
Artificial Intelligence Privacy and Convenience (LearnQuest via Coursera)
Natural Language Processing and Capstone Assignment (University of California, Irvine via Coursera)
Modern Artificial Intelligence Masterclass: Build 6 Projects (Udemy)
Data Science for Business (DataCamp)