
Diffusion Models Beat GANs on Image Synthesis - ML Coding Series - Part 2

Offered By: Aleksa Gordić - The AI Epiphany via YouTube

Tags

Generative Adversarial Networks (GAN) Courses, Machine Learning Courses, Image Synthesis Courses, Sampling Courses, Diffusion Models Courses

Course Description

Overview

Dive into an in-depth video tutorial exploring the paper "Diffusion Models Beat GANs on Image Synthesis" and its accompanying code. Learn about the U-Net architecture improvements, classifier guidance, and the intuition behind these concepts. Explore the training process for noise-aware classifiers, visualize timestep conditioning, and walk through the core sampling logic. See how classifier guidance is implemented by shifting the reverse-process mean with classifier gradients, and gain insight into the trade-off between diversity and quality in image synthesis. Examine a minor bug in the original code and follow along with practical coding examples throughout this comprehensive machine learning session.
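
The central trick covered in the video is a small change to the DDPM sampler: the reverse-process mean predicted by the diffusion model is shifted by the gradient of a noise-aware classifier, scaled by the variance and a guidance scale. As a rough illustration only, here is a minimal PyTorch sketch of that step; the `classifier`, `mean`, and `variance` arguments stand in for the outputs of models built elsewhere and are assumptions, not the repository's exact API.

```python
import torch
import torch.nn.functional as F

def classifier_grad(classifier, x_t, t, y):
    """Gradient of log p(y | x_t, t) with respect to the noised image x_t."""
    with torch.enable_grad():
        x_in = x_t.detach().requires_grad_(True)
        logits = classifier(x_in, t)               # timestep-conditioned classifier
        log_probs = F.log_softmax(logits, dim=-1)
        selected = log_probs[range(x_in.shape[0]), y].sum()
        return torch.autograd.grad(selected, x_in)[0]

def guided_sample_step(mean, variance, grad, guidance_scale=1.0):
    """One reverse step: shift the mean by scale * variance * grad, then sample."""
    shifted_mean = mean + guidance_scale * variance * grad
    return shifted_mean + variance.sqrt() * torch.randn_like(mean)
```

Larger guidance scales push samples more strongly toward the target class at the cost of sample diversity, which is the diversity-versus-quality trade-off the video discusses.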

Syllabus

Intro
Paper overview part - U-Net architecture improvements
Classifier guidance explained
Intuition behind classifier guidance
Scaling classifier guidance
Diversity vs quality tradeoff and future work
Coding part - training a noise-aware classifier (see the sketch after this syllabus)
Main training loop
Visualizing timestep conditioning
Sampling using classifier guidance
Core of the sampling logic
Shifting the mean - classifier guidance
Minor bug in their code and my GitHub issue
Outro
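
Below is a minimal sketch of what one training step for a noise-aware classifier might look like, matching the "Coding part" items above. The `q_sample` helper, which noises a clean image according to the forward process q(x_t | x_0), is a hypothetical stand-in, not necessarily the repository's function name.

```python
import torch
import torch.nn.functional as F

def train_step(classifier, q_sample, optimizer, x0, y, num_timesteps):
    """One step of noise-aware classifier training (hypothetical sketch).

    q_sample(x0, t) is assumed to add t steps of forward-process noise to x0.
    """
    t = torch.randint(0, num_timesteps, (x0.shape[0],), device=x0.device)
    x_t = q_sample(x0, t)                  # noised inputs at random timesteps
    logits = classifier(x_t, t)            # the classifier also sees the timestep
    loss = F.cross_entropy(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Conditioning on t is what makes the classifier usable at every noise level encountered during sampling.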


Taught by

Aleksa Gordić - The AI Epiphany

Related Courses

Diffusion Models Beat GANs on Image Synthesis - Machine Learning Research Paper Explained
Yannic Kilcher via YouTube
OpenAI GLIDE - Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models
Aleksa Gordić - The AI Epiphany via YouTube
Food for Diffusion
HuggingFace via YouTube
Imagen: Text-to-Image Generation Using Diffusion Models - Lecture 9
University of Central Florida via YouTube
Denoising Diffusion-Based Generative Modeling
Open Data Science via YouTube