VQ-GAN - Taming Transformers for High-Resolution Image Synthesis - Paper Explained
Offered By: Aleksa Gordić - The AI Epiphany via YouTube
Course Description
Overview
Explore a comprehensive video explanation of the VQ-GAN (Vector Quantized Generative Adversarial Network) paper, "Taming Transformers for High-Resolution Image Synthesis." Dive into the key modifications to VQ-VAE, including a perceptual loss and a patch-based adversarial loss for crisper reconstructions. Learn how a GPT-style transformer predicts sequences of codebook indices, how high-resolution images are generated, and how each term of the loss works. Discover transformer training and conditioning techniques, and various sampling strategies. Compare results with other models, including DALL-E, and understand how receptive field size affects the generated images.
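To make the two-stage idea from the overview concrete, here is a minimal, illustrative sketch of stage one: a small convolutional encoder maps an image to a latent grid, and each latent vector is snapped to its nearest codebook entry, yielding a grid of discrete token indices. The names (ToyEncoder, quantize), layer sizes, and codebook size 512 are assumptions for illustration only, not the paper's actual architecture, which uses a much deeper encoder trained with the perceptual and patch-based adversarial losses discussed in the video.

```python
# Illustrative sketch only: toy stage-1 quantization in the spirit of VQ-GAN.
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    """Downsamples an RGB image into a grid of continuous latent vectors."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, latent_dim, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)  # (B, latent_dim, H/4, W/4)

def quantize(z, codebook):
    """Replace each latent vector with its nearest codebook entry."""
    B, C, H, W = z.shape
    flat = z.permute(0, 2, 3, 1).reshape(-1, C)      # (B*H*W, C)
    dists = torch.cdist(flat, codebook.weight)       # distance to every code
    indices = dists.argmin(dim=1)                    # discrete token ids
    z_q = codebook(indices).view(B, H, W, C).permute(0, 3, 1, 2)
    return z_q, indices.view(B, H * W)

codebook = nn.Embedding(512, 64)                     # 512 codes, dim 64 (arbitrary here)
encoder = ToyEncoder(latent_dim=64)

img = torch.randn(2, 3, 64, 64)                      # fake image batch
z_q, idx = quantize(encoder(img), codebook)
print(z_q.shape, idx.shape)                          # (2, 64, 16, 16) and (2, 256)
```

In VQ-GAN itself, gradients are passed through the non-differentiable quantization step with a straight-through estimator, and the flattened index sequence (length 256 in this toy setup) is exactly what the GPT-style transformer is trained to predict autoregressively in the second stage.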
Syllabus
Intro
A high-level VQ-GAN overview
Perceptual loss
Patch-based adversarial loss
Sequence prediction via GPT
Generating high-res images
Loss explained in depth
Training the transformer
Conditioning the transformer
Comparisons and results
Sampling strategies
Comparisons and results continued
Rejection sampling with ResNet or CLIP
Receptive field effects
Comparisons with DALL-E
Taught by
Aleksa Gordić - The AI Epiphany
Related Courses
Apply Generative Adversarial Networks (GANs) - DeepLearning.AI via Coursera
Build Basic Generative Adversarial Networks (GANs) - DeepLearning.AI via Coursera
Build Better Generative Adversarial Networks (GANs) - DeepLearning.AI via Coursera
Building your first GAN in Python - Coursera Project Network via Coursera
Generative AI for Data Science with Copilot - Microsoft via Coursera