YoVDO

Towards Nonlinear Disentanglement in Natural Data with Temporal Sparse Coding

Offered By: Andreas Geiger via YouTube

Tags

Unsupervised Learning Courses Machine Learning Courses Computer Vision Courses

Course Description

Overview

Explore the concept of nonlinear disentanglement in natural data through temporal sparse coding in this 52-minute talk by Yash Sharma, given at the Tübingen seminar series of the Autonomous Vision Group. Delve into unsupervised representation learning techniques for disentangling the underlying factors of variation in naturalistic videos. Examine the SlowVAE model, which places a sparse prior on temporal transitions in the latent space to achieve disentanglement without assumptions on the number of factors changing between frames. Learn about the proof of identifiability and the model's performance on benchmark datasets. Discover two new video datasets with natural dynamics, Natural Sprites and KITTI Masks, introduced as benchmarks for disentanglement research. Gain insights into time contrastive learning, permutation contrastive learning, and the Slow Variational Autoencoder. Explore results on various datasets and consider open questions in the field of disentanglement in machine learning.
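The key idea behind the SlowVAE described above is that between consecutive video frames only a few latent factors change, which a Laplace (L1-like) prior on latent transitions encourages, unlike a Gaussian "slowness" prior. The numpy sketch below is illustrative only (the function names and the rate parameter are assumptions for this sketch, not the talk's code); it contrasts how the two priors score a sparse versus a dense latent change of the same total magnitude:

```python
import numpy as np

def laplace_transition_penalty(z_t, z_prev, rate=1.0):
    """Negative log-density of the latent change z_t - z_prev under a
    Laplace prior with the given rate: an L1 penalty up to a constant.
    (Hypothetical helper for illustration, not the talk's implementation.)"""
    d = np.abs(z_t - z_prev)
    return np.sum(rate * d - np.log(rate / 2.0), axis=-1)

def gaussian_transition_penalty(z_t, z_prev):
    """The same quantity under a standard Gaussian (slowness) prior."""
    d = z_t - z_prev
    return np.sum(0.5 * d**2 + 0.5 * np.log(2.0 * np.pi), axis=-1)

# One factor changing vs. the same total change spread over all factors:
sparse_step = np.array([1.0, 0.0, 0.0])
dense_step = np.array([1/3, 1/3, 1/3])
zero = np.zeros(3)

# A Gaussian prior penalizes the concentrated (sparse) change more ...
gauss_sparse = gaussian_transition_penalty(sparse_step, zero)
gauss_dense = gaussian_transition_penalty(dense_step, zero)

# ... while the Laplace prior is indifferent, so it does not discourage
# sparse, axis-aligned factor changes between frames.
lap_sparse = laplace_transition_penalty(sparse_step, zero)
lap_dense = laplace_transition_penalty(dense_step, zero)
```

Under the Gaussian prior the model is pushed toward spreading change across many latents, whereas the Laplace transition prior tolerates a single factor absorbing the whole change, which is the sparsity property the talk's identifiability argument relies on.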

Syllabus

Intro
Overview
What is Disentanglement?
Disentanglement Methods
What about time?
Time Contrastive Learning (TCL)
Why does this work?
Permutation Contrastive Learning (PCL)
What about reality?
Identifiability Proof Intuition
Slow Variational Autoencoder (SlowVAE)
Disentanglement Lib
Results on DSprites
Results on KITTI Masks
Natural Sprites and KITTI Masks
PCL & Ada-GVAE
PCL Simulation
Open Questions


Taught by

Andreas Geiger

Related Courses

Machine Learning: Unsupervised Learning
Brown University via Udacity
Practical Predictive Analytics: Models and Methods
University of Washington via Coursera
Finding Structure in Data (Поиск структуры в данных)
Moscow Institute of Physics and Technology via Coursera
Statistical Machine Learning
Carnegie Mellon University via Independent
FA17: Machine Learning
Georgia Institute of Technology via edX