Deep Generative Modeling - MIT 6.S191 Lecture 4
Offered By: Alexander Amini via YouTube
Course Description
Overview
Explore deep generative modeling in this comprehensive lecture from MIT's Introduction to Deep Learning course. Delve into the importance of generative models, latent variable models, and autoencoders. Learn about variational autoencoders, including priors on latent distributions, the reparameterization trick, and applications in latent perturbation, disentanglement, and debiasing. Discover generative adversarial networks (GANs), their intuitions, training processes, and recent advances. Examine CycleGAN for unpaired translation and get a sneak peek at diffusion models. Gain valuable insights into cutting-edge deep learning techniques through this in-depth, 56-minute presentation by lecturer Ava Amini.
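One of the topics covered, the reparameterization trick, can be summarized in a few lines of code. The following is a minimal sketch, assuming PyTorch and an illustrative 2-D latent space; the function name and tensor shapes are hypothetical and not taken from the lecture's materials.

```python
# Minimal sketch of the reparameterization trick discussed in the lecture.
# Names and shapes are illustrative assumptions, not from the course code.
import torch

def reparameterize(mu: torch.Tensor, log_var: torch.Tensor) -> torch.Tensor:
    """Sample z ~ N(mu, sigma^2) while keeping gradients flowing to mu and log_var."""
    std = torch.exp(0.5 * log_var)   # sigma = exp(log_var / 2)
    eps = torch.randn_like(std)      # noise drawn outside the computation graph
    return mu + eps * std            # differentiable w.r.t. mu and log_var

# Usage with a hypothetical 2-D latent space:
mu = torch.zeros(1, 2, requires_grad=True)
log_var = torch.zeros(1, 2, requires_grad=True)
z = reparameterize(mu, log_var)
z.sum().backward()                   # gradients reach mu and log_var despite the sampling step
```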
Syllabus
- Introduction
- Why care about generative models?
- Latent variable models
- Autoencoders
- Variational autoencoders
- Priors on the latent distribution
- Reparameterization trick
- Latent perturbation and disentanglement
- Debiasing with VAEs
- Generative adversarial networks
- Intuitions behind GANs
- Training GANs
- GANs: Recent advances
- CycleGAN for unpaired translation
- Diffusion Model sneak peek
Taught by
https://www.youtube.com/@AAmini/videos
Related Courses
- Topographic VAEs Learn Equivariant Capsules - Machine Learning Research Paper Explained (Yannic Kilcher via YouTube)
- Deep Generative Modeling (Alexander Amini via YouTube)
- Learning What We Know and Knowing What We Learn - Gaussian Process Priors for Neural Data Analysis (MITCBMM via YouTube)