Uncovering and Inducing Interpretable Causal Structure in Deep Learning Models
Offered By: Valence Labs via YouTube
Course Description
Overview
Explore a comprehensive lecture on uncovering and inducing interpretable causal structure in deep learning models. Delve into the theory of causal abstraction as a foundation for creating faithful and interpretable explanations of AI model behavior. Learn about two approaches: analysis mode, which uses interventions on model-internal states to uncover causal structure, and training mode, which induces interpretable causal structure through interventions during model training. Examine case studies demonstrating these techniques applied to deep learning models processing language and images. The talk covers key concepts including causal abstraction, interchange interventions, and distributed alignment search, providing insights into creating more transparent and understandable AI systems.
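The interchange interventions described above can be sketched in a few lines: run a "base" input through the model, but replace an internal state with the one computed from a "source" input, and see whether the output tracks the swapped state. The toy two-layer model and all variable names below are illustrative assumptions, not material from the talk.

```python
# Minimal sketch of an interchange intervention on a toy two-layer model.
# The model and variable names are illustrative, not from the lecture.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # layer-1 weights
W2 = rng.normal(size=(2, 4))   # layer-2 weights

def layer1(x):
    return np.tanh(W1 @ x)     # hidden representation

def layer2(h):
    return W2 @ h              # model output

def run(x, patched_hidden=None):
    """Run the model; optionally replace the hidden state (the intervention)."""
    h = layer1(x) if patched_hidden is None else patched_hidden
    return layer2(h)

base = np.array([1.0, 0.0, -1.0])
source = np.array([0.5, 2.0, 0.0])

# Interchange intervention: run the base input, but with the hidden state
# taken from the source input. In this toy model the output is fully
# determined by the hidden state, so the interchanged run matches the
# source run exactly -- the hidden layer causally mediates the output.
h_source = layer1(source)
out_interchanged = run(base, patched_hidden=h_source)
out_source = run(source)
assert np.allclose(out_interchanged, out_source)
```

In a real analysis the same swap is performed on a trained network (e.g. via forward hooks), and agreement between the interchanged output and the output predicted by a high-level causal model is evidence for the proposed alignment.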
Syllabus
- Discussant Slide
- Introduction
- Causal Abstraction
- Interchange Interventions
- Distributed Alignment Search
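Distributed alignment search, the last syllabus item, generalizes interchange interventions from individual neurons to a learned subspace: a rotation of the hidden state is optimized so that a high-level causal variable aligns with a few rotated coordinates. The sketch below shows only the rotated interchange step that such a search optimizes over; the fixed random rotation stands in for the learned one, and all names are illustrative assumptions.

```python
# Sketch of the rotated interchange intervention underlying distributed
# alignment search (DAS). A fixed random rotation stands in for the
# learned one; everything here is illustrative.
import numpy as np

rng = np.random.default_rng(1)
# Orthonormal rotation of the 4-d hidden space (QR of a random matrix).
R, _ = np.linalg.qr(rng.normal(size=(4, 4)))
k = 2  # dimensionality of the subspace aligned with the causal variable

def rotated_interchange(h_base, h_source, R, k):
    """Swap the first k rotated coordinates of the source state into the base state."""
    z_base, z_source = R @ h_base, R @ h_source
    z_base[:k] = z_source[:k]      # intervention in the rotated basis
    return R.T @ z_base            # rotate back to the model's own basis

h_base = rng.normal(size=4)
h_source = rng.normal(size=4)
h_new = rotated_interchange(h_base.copy(), h_source.copy(), R, k)

# In the rotated basis, the first k coordinates now match the source state,
# while the remaining coordinates are untouched.
assert np.allclose((R @ h_new)[:k], (R @ h_source)[:k])
assert np.allclose((R @ h_new)[k:], (R @ h_base)[k:])
```

DAS then trains `R` (by gradient descent on interchange-intervention outcomes) so that swapping this subspace reproduces the behavior predicted by the high-level causal model.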
Taught by
Valence Labs
Related Courses
- Neural Networks for Machine Learning — University of Toronto via Coursera
- Machine Learning Techniques (機器學習技法) — National Taiwan University via Coursera
- Machine Learning Capstone: An Intelligent Application with Deep Learning — University of Washington via Coursera
- Applied Problems in Data Analysis (Прикладные задачи анализа данных) — Moscow Institute of Physics and Technology via Coursera
- Leading Ambitious Teaching and Learning — Microsoft via edX