VQ-GAN - Taming Transformers for High-Resolution Image Synthesis - Paper Explained
Offered By: Aleksa Gordić - The AI Epiphany via YouTube
Course Description
Overview
Explore a comprehensive video explanation of the VQ-GAN (Vector Quantized Generative Adversarial Network) paper, which tackles high-resolution image synthesis with transformers. Dive into the key modifications to VQ-VAE, including a perceptual loss and a patch-based adversarial loss for crisper outputs. Learn how codebook indices are predicted as a sequence with a GPT-style transformer, how high-resolution images are generated, and how the individual loss terms fit together. Discover transformer training techniques, conditioning methods, and various sampling strategies (sketched briefly after the syllabus). Compare results with other models, including DALL-E, and understand how the receptive field affects image generation.
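To make the loss discussion concrete, here is a minimal PyTorch-style sketch of how the VQ-GAN generator objective combines its terms: an L1 plus perceptual reconstruction loss, the VQ-VAE codebook and commitment losses, and a patch-based adversarial loss scaled by an adaptive weight. This is an illustrative sketch rather than the authors' code; encoder, decoder, quantize, perceptual, discriminator, and last_decoder_layer are hypothetical stand-ins for the real modules.

import torch
import torch.nn.functional as F

def vqgan_generator_loss(x, encoder, decoder, quantize, discriminator,
                         perceptual, last_decoder_layer, beta=0.25, delta=1e-6):
    # NOTE: all module arguments are hypothetical stand-ins, not the paper's code.
    z_e = encoder(x)                 # continuous latents
    z_q, _ = quantize(z_e)           # nearest codebook entries
    # straight-through estimator: copy gradients from the decoder input to z_e
    z_q_st = z_e + (z_q - z_e).detach()
    x_rec = decoder(z_q_st)

    # reconstruction: pixel L1 plus a perceptual (LPIPS-style) term
    rec_loss = F.l1_loss(x_rec, x) + perceptual(x_rec, x)

    # VQ-VAE codebook loss and beta-weighted commitment loss
    vq_loss = F.mse_loss(z_q, z_e.detach()) + beta * F.mse_loss(z_e, z_q.detach())

    # patch-based adversarial loss: the discriminator scores local patches
    logits_fake = discriminator(x_rec)
    g_loss = -logits_fake.mean()

    # adaptive weight lambda = ||grad rec|| / (||grad adv|| + delta),
    # computed w.r.t. the last decoder layer's parameters
    grad_rec = torch.autograd.grad(rec_loss, last_decoder_layer, retain_graph=True)[0]
    grad_adv = torch.autograd.grad(g_loss, last_decoder_layer, retain_graph=True)[0]
    lam = (grad_rec.norm() / (grad_adv.norm() + delta)).clamp(0, 1e4).detach()

    return rec_loss + vq_loss + lam * g_loss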
Syllabus
Intro
A high-level VQ-GAN overview
Perceptual loss
Patch-based adversarial loss
Sequence prediction via GPT
Generating high-res images
Loss explained in depth
Training the transformer
Conditioning transformer
Comparisons and results
Sampling strategies
Comparisons and results continued
Rejection sampling with ResNet or CLIP
Receptive field effects
Comparisons with DALL-E
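The "Sequence prediction via GPT" and "Sampling strategies" chapters can be illustrated with a short sketch of autoregressive sampling over codebook indices with top-k filtering, followed by decoding the resulting latent grid back to an image. This assumes a simple interface and is not the authors' implementation; transformer, decoder, codebook, and sos_token are hypothetical stand-ins (the real model can condition the prefix on class or spatial information, and generates images beyond the training resolution with a sliding attention window).

import torch
import torch.nn.functional as F

@torch.no_grad()
def sample_image(transformer, decoder, codebook, sos_token=0,
                 seq_len=256, top_k=100, temperature=1.0, device="cpu"):
    # start from a single start-of-sequence (or conditioning) token
    indices = torch.full((1, 1), sos_token, dtype=torch.long, device=device)
    for _ in range(seq_len):
        logits = transformer(indices)[:, -1, :] / temperature   # next-token logits
        v, _ = torch.topk(logits, top_k)                         # top-k filtering
        logits[logits < v[:, [-1]]] = -float("inf")
        probs = F.softmax(logits, dim=-1)
        next_idx = torch.multinomial(probs, num_samples=1)       # sample one code
        indices = torch.cat([indices, next_idx], dim=1)
    codes = indices[:, 1:]                                       # drop the prefix token
    # look up codebook vectors and decode the latent grid (e.g. 16x16) to an image
    h = w = int(seq_len ** 0.5)
    z_q = codebook(codes).reshape(1, h, w, -1).permute(0, 3, 1, 2)
    return decoder(z_q)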
Taught by
Aleksa Gordić - The AI Epiphany
Related Courses
Artificial Creativity - Parsons School of Design via Coursera
Building Language Models on AWS (Japanese) - Amazon Web Services via AWS Skill Builder
Deep Learning NLP: Training GPT-2 from scratch - Coursera Project Network via Coursera
Generating New Recipes using GPT-2 - Coursera Project Network via Coursera
Accelerating High-Performance Machine Learning at Scale in Kubernetes - CNCF [Cloud Native Computing Foundation] via YouTube