Domain Adaptation with Invariant Representation Learning - What Transformations to Learn?
Offered By: Stanford University via YouTube
Course Description
Overview
Explore domain adaptation techniques for invariant representation learning in this Stanford University lecture. Delve into the challenges of unsupervised domain adaptation and learn why fixed mappings across domains may be insufficient. Discover an efficient method that incorporates domain-specific information to generate optimal representations for classification. Examine the importance of minimal changes in causal mechanisms across domains and how this approach preserves valuable information. Follow along as the speaker presents synthetic and real-world data experiments demonstrating the effectiveness of the proposed technique. Gain insights into transfer learning, causal discovery, and their applications in computational biology and cancer research.
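The lecture's adversarial-training component connects to a classic result: a GAN-style discriminator objective, at its optimum, reduces to the Jensen-Shannon divergence between the source and target feature distributions, so driving it down makes representations domain-invariant. As a minimal illustration (function names ours, not from the lecture), the divergence for discrete distributions can be sketched as:

```python
import math

def kl(p, q):
    # Kullback-Leibler divergence for discrete distributions;
    # zero-probability terms contribute nothing and are skipped.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    # Jensen-Shannon divergence: symmetric version of KL against the
    # mixture m = (p + q) / 2, bounded above by log 2.
    m = [0.5 * (pi + qi) for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Identical feature distributions across domains -> divergence 0 (invariance).
print(js_divergence([0.25, 0.25, 0.5], [0.25, 0.25, 0.5]))  # 0.0
# Completely disjoint supports -> maximal divergence log 2.
print(js_divergence([1.0, 0.0], [0.0, 1.0]))
```

In the adversarial setup described in the syllabus, the feature extractor is trained so that this divergence between domains shrinks while a classifier head still succeeds on the source labels.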
Syllabus
Introduction
Motivation
Why don't they work
Conditional Target Shift
Neural Network Setup
Minimize Jensen-Shannon Divergence
Adversarial training
Translation
Optimization
Contrastive training
Simulation
Datasets
Results
Future work
Taught by
Stanford MedAI
Tags
Related Courses
Structuring Machine Learning Projects - DeepLearning.AI via Coursera
Natural Language Processing on Google Cloud - Google Cloud via Coursera
Introduction to Learning Transfer and Life Long Learning (3L) - University of California, Irvine via Coursera
Advanced Deployment Scenarios with TensorFlow - DeepLearning.AI via Coursera
Neural Style Transfer with TensorFlow - Coursera Project Network via Coursera