Interpretability for Deep Learning: Theory, Applications and Scientific Insights
Offered By: Institute for Pure & Applied Mathematics (IPAM) via YouTube
Course Description
Overview
Explore a comprehensive lecture on deep learning interpretability presented by Oliver Eberle from Technische Universität Berlin at IPAM's Theory and Practice of Deep Learning Workshop. Delve into the importance of understanding complex decision strategies in deep learning models and the field of Explainable AI. Discover methods for improving transparency, safety, and trustworthiness in model deployment. Gain insights into techniques for revealing higher-order interactions and undesired model behavior. Learn about practical applications of these interpretability tools in scientific discovery, including early modern history of science, human alignment with language models, and histopathology. This 56-minute talk offers a thorough examination of the theory, applications, and scientific insights derived from deep learning interpretability.
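The lecture discusses attribution-style interpretability methods, including ones that expose higher-order interactions between input features. As a minimal, hypothetical sketch (not code from the talk), the PyTorch snippet below contrasts first-order gradient-times-input attributions with second-order, Hessian-based interaction scores on a toy network; the model, feature dimension, and scoring choices are illustrative assumptions only.

```python
import torch

# Illustrative example (not from the lecture): first- and second-order
# attribution scores for a small differentiable model with scalar output.
torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.Tanh(),
    torch.nn.Linear(8, 1),
)

x = torch.randn(4, requires_grad=True)
y = model(x).squeeze()

# First-order attribution: gradient x input scores each feature individually.
grad = torch.autograd.grad(y, x)[0]
first_order = grad * x

# Second-order scores: the Hessian weighted by the outer product of the input
# gives pairwise (higher-order) interaction strengths between features.
hessian = torch.autograd.functional.hessian(lambda z: model(z).squeeze(), x.detach())
second_order = hessian * torch.outer(x.detach(), x.detach())

print("feature attributions:", first_order.detach())
print("pairwise interactions:\n", second_order)
```

In such a scheme, a large entry (i, j) in the second-order matrix indicates that features i and j jointly influence the output beyond their individual contributions, which is the kind of structure first-order methods cannot reveal.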
Syllabus
Oliver Eberle - Interpretability for Deep Learning: Theory, Applications and Scientific Insights
Taught by
Institute for Pure & Applied Mathematics (IPAM)
Related Courses
Machine Learning Modeling Pipelines in Production (DeepLearning.AI via Coursera)
Live Responsible AI Dashboard: One-Stop Shop for Operationalizing RAI in Practice - Episode 43 (Microsoft via YouTube)
Build Responsible AI Using Error Analysis Toolkit (Microsoft via YouTube)
Neural Networks Are Decision Trees - With Alexander Mattick (Yannic Kilcher via YouTube)
Interpretable Explanations of Black Boxes by Meaningful Perturbation - CAP6412 Spring 2021 (University of Central Florida via YouTube)