Towards Unsupervised Learning of Generative Models for 3D Controllable Image Synthesis
Offered By: Andreas Geiger via YouTube
Course Description
Overview
Explore a keynote presentation on unsupervised learning of generative models for 3D controllable image synthesis. Delve into the potential of replacing traditional rendering pipelines with efficient models learned directly from images. Examine the challenges of disentangling 3D properties in the 2D image domain and the lack of interpretable, controllable representations in current image synthesis models. Discover an approach that reasons jointly in 3D space and the 2D image domain to tackle 3D controllable image synthesis. Learn about a model that disentangles latent 3D factors from raw images without supervision, enabling consistent synthesis of novel scenes. Follow the presentation's structure, covering the introduction, dominant paradigms, current SLAM systems, goals, problem statement, generative models, model overview, loss functions, results, baselines, the full model, flying-furniture examples, video representation, failure cases, improved 3D representations, and directions for future work.
Syllabus
Introduction
Two dominating paradigms
Current SLAM Systems
Goals
Problem
Current Generative Models
Model Overview
Loss Functions
Results
Baseline
Full Model
Flying Furniture
Video Representation
Failure Cases
Better 3D Representation
Overview
Future Work
KITTI-360
Conclusion
Taught by
Andreas Geiger
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Computational Photography - Georgia Institute of Technology via Coursera
Einführung in Computer Vision - Technische Universität München (Technical University of Munich) via Coursera
Introduction to Computer Vision - Georgia Institute of Technology via Udacity