Deep Generative Models and Stable Diffusion: Revolution in Visual Synthesis - Lecture by Björn Ommer
Offered By: Lennart Svensson via YouTube
Course Description
Overview
Explore the revolutionary world of deep generative models and visual synthesis in this guest lecture by Professor Björn Ommer. Delve into an overview of generative models, with a focus on denoising diffusion probabilistic models and their role in visual synthesis. Learn about the groundbreaking Stable Diffusion approach, which substantially improves the efficiency of diffusion models and distills billions of training samples into compact representations. Discover how these advances make visual synthesis possible on consumer GPUs, and gain insights into current extensions and future directions in the field. Examine the potential applications and limitations of these cutting-edge technologies across domains including the digital humanities and the life sciences.
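To make the consumer-GPU point concrete, the short sketch below shows how a Stable Diffusion model can be run with the open-source Hugging Face diffusers library. It is an illustration rather than part of the course materials; the checkpoint name, prompt, and output file name are assumptions chosen for the example.

# Minimal sketch (not from the lecture): text-to-image synthesis with Stable Diffusion
# on a single consumer GPU using the Hugging Face diffusers library.
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint in half precision to fit typical consumer GPU memory.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint; any Stable Diffusion model works
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# The prompt is encoded, iterative denoising runs in a compressed latent space,
# and a decoder maps the final latent back to pixel space.
image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")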
Syllabus
Introduction
Overview of generative models
Diffusion models
Stable diffusion
Retrieval-Augmented Diffusion Models
Taught by
Björn Ommer (guest lecture hosted on Lennart Svensson's channel)
Related Courses
Machine Learning with Graphs - Fall 2019 (Stanford University via YouTube)
Topographic VAEs Learn Equivariant Capsules - Machine Learning Research Paper Explained (Yannic Kilcher via YouTube)
Generative Models With Domain Knowledge for Weakly Supervised Clustering (Stanford University via YouTube)
Toward Brain Computer Interface - Deep Generative Models for Brain Reading (MITCBMM via YouTube)
Deep Generative Models for Speech and Images (MITCBMM via YouTube)