Identifying Representations for Intervention Extrapolation
Offered By: Valence Labs via YouTube
Course Description
Overview
Explore a 33-minute conference talk on identifying representations for intervention extrapolation, presented by Sorawit (James) Saengkyongam on the Valence Labs channel. Delve into how identifiable and causal representation learning can improve the generalizability and robustness of machine learning models. Examine the task of intervention extrapolation: predicting how interventions not observed during training affect an outcome. Learn about the setup, which involves an outcome Y, observed features X, latent features Z, and exogenous action variables A. Discover how identifiable representations make extrapolation feasible even when the effect of the intervention on the outcome is non-linear. Understand the Rep4Ex approach, which combines intervention extrapolation with identifiable representation learning. Explore the theoretical findings on identifiability and the proposed method for enforcing a linear invariance constraint. Follow along as the speaker validates the theoretical findings through synthetic experiments and demonstrates that the approach successfully predicts the effects of unseen interventions. Engage with the Q&A session to gain further insight into this cutting-edge research in causal representation learning.
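For readers who want the shape of the problem before watching, here is a rough sketch of the setup in the notation above, assuming a Rep4Ex-style formulation in which the action enters the latents linearly; the precise assumptions are stated in the talk, and the mixing $g$, first-stage matrix $M$, outcome map $\ell$, and noise terms here are illustrative stand-ins:

\[
Z = M A + \varepsilon, \qquad X = g(Z), \qquad Y = \ell(Z) + \eta,
\]

where $g$ is an injective non-linear mixing and $A$ is exogenous. Intervention extrapolation asks for $\mathbb{E}[Y \mid \mathrm{do}(A = a)]$ at actions $a$ outside the training support. Under this reading, the identification idea is that an encoder $\phi$ satisfying the linear invariance constraint (that $\mathbb{E}[\phi(X) \mid A = a]$ is linear in $a$) recovers the unmixing $g^{-1}$ up to an affine transformation, which is enough to extrapolate the linear first stage and then apply the estimated map from latents to outcome.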
Syllabus
- Introduction
- Intervention Extrapolation with Observed Z
- Intervention Extrapolation via Identifiable Representations
- Identification of the Unmixing Function
- Simulations (see the illustrative sketch after this syllabus)
- Q&A
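The Simulations segment can be previewed with a small toy experiment. The sketch below is not the speaker's code: under the illustrative model above (a linear first stage Z = MA + ε and an injective non-linear mixing g), it checks the linear-invariance property that distinguishes a good unmixing: regressing the true unmixing g⁻¹(X) on A leaves only first-stage noise, while the raw observations X retain a non-linear dependence on A. The dimensions, the tanh-plus-rotation mixing, and the pooled R² diagnostic are all assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the talk).
n, d_a, d_z = 10_000, 2, 2

# Exogenous action A and a linear first stage: Z = M A + eps.
M = rng.normal(size=(d_z, d_a))
A = rng.normal(size=(n, d_a))
Z = A @ M.T + 0.1 * rng.normal(size=(n, d_z))

# Injective non-linear mixing g: element-wise tanh followed by a
# random rotation Q, standing in for the unknown mixing function.
Q, _ = np.linalg.qr(rng.normal(size=(d_z, d_z)))

def g(z):
    return np.tanh(z) @ Q

def g_inv(x):
    # Inverse of g; the clip only guards against floating-point edge cases.
    return np.arctanh(np.clip(x @ Q.T, -1 + 1e-12, 1 - 1e-12))

X = g(Z)

def linearity_r2(phi_x, a):
    """Pooled R^2 from regressing each coordinate of phi(X) on A (plus intercept).

    Values near 1 mean E[phi(X) | A] is (close to) linear in A.
    """
    design = np.hstack([a, np.ones((len(a), 1))])
    coef, *_ = np.linalg.lstsq(design, phi_x, rcond=None)
    resid = phi_x - design @ coef
    return 1.0 - resid.var(axis=0).sum() / phi_x.var(axis=0).sum()

# The true unmixing should satisfy linear invariance almost perfectly;
# the raw observations should not.
print("R^2, true unmixing g^{-1}(X):", round(linearity_r2(g_inv(X), A), 3))
print("R^2, raw observations X     :", round(linearity_r2(X, A), 3))
```

In this toy check the true unmixing scores near 1 (only the first-stage noise is unexplained), while the raw observations score visibly lower; the talk's method goes further and uses the constraint to learn such an encoder from X and A alone.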
Taught by
Valence Labs
Related Courses
- From Graph to Knowledge Graph – Algorithms and Applications (Microsoft via edX)
- Social Network Analysis (Indraprastha Institute of Information Technology Delhi via Swayam)
- Stanford Seminar - Representation Learning for Autonomous Robots, Anima Anandkumar (Stanford University via YouTube)
- Unsupervised Brain Models - How Does Deep Learning Inform Neuroscience? (Yannic Kilcher via YouTube)
- Emerging Properties in Self-Supervised Vision Transformers - Facebook AI Research Explained (Yannic Kilcher via YouTube)