Uncovering and Inducing Interpretable Causal Structure in Deep Learning Models

Offered By: Valence Labs via YouTube

Tags

Interpretability Courses, Artificial Intelligence Courses, Machine Learning Courses, Deep Learning Courses, Computer Vision Courses

Course Description

Overview

Explore a comprehensive lecture on uncovering and inducing interpretable causal structure in deep learning models. Delve into the theory of causal abstraction as a foundation for creating faithful and interpretable explanations of AI model behavior. Learn about two approaches: analysis mode, which uses interventions on model-internal states to uncover causal structure, and training mode, which induces interpretable causal structure through interventions during model training. Examine case studies demonstrating these techniques applied to deep learning models processing language and images. The talk covers key concepts including causal abstraction, interchange interventions, and distributed alignment search, providing insights into creating more transparent and understandable AI systems.
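The interchange interventions described above can be illustrated with a toy example. The sketch below is not from the lecture; it uses a hypothetical "model" with an explicit intermediate value to show the core idea: run the model on a source input, record an internal value, then re-run it on a base input with that value patched in. If the patched output matches what the source input would imply, the intermediate value causally mediates the computation.

```python
# Hypothetical toy model computing (x + y) * z through an
# intermediate value s = x + y. Real interchange interventions
# patch hidden activations of a neural network instead.
def run_model(x, y, z, intervene_s=None):
    s = x + y if intervene_s is None else intervene_s  # intervention point
    return s * z, s

base = (1, 2, 3)    # base run: s = 3, output = 9
source = (4, 5, 3)  # source run: s = 9

# Step 1: run the source input and record the intermediate value.
_, s_source = run_model(*source)

# Step 2: re-run the base input with the source's intermediate patched in.
patched_out, _ = run_model(*base, intervene_s=s_source)

# Because s fully mediates the computation, the patched base run
# produces the output the model gives for the source's sum: 9 * 3 = 27.
print(patched_out)
```

In a deep learning model, the same recipe applies to hidden-layer activations, and distributed alignment search extends it by learning which linear subspace of the activations to swap rather than fixing it by hand.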

Syllabus

- Discussant Slide
- Introduction
- Causal Abstraction
- Interchange Interventions
- Distributed Alignment Search


Taught by

Valence Labs

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Natural Language Processing
Columbia University via Coursera
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent