DALL-E - Zero-Shot Text-to-Image Generation - Paper Explained
Offered By: Aleksa Gordić - The AI Epiphany via YouTube
Course Description
Overview
Dive into a comprehensive video explanation of OpenAI's DALL-E paper on zero-shot text-to-image generation. Explore the two-stage process that pairs a VQ-VAE with an autoregressive transformer, review the ELBO concepts behind it, and see how the model combines distinct concepts into plausible images. Learn about the engineering challenges, automatic filtering of samples using CLIP, and impressive results, including image-to-image translation capabilities. Gain insights into this groundbreaking AI technology through detailed explanations and visual examples.
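For orientation, the sampling side of that two-stage process can be sketched in a few lines of Python. This is a minimal illustration only, assuming hypothetical transformer (the stage 2 prior over text and image tokens) and dvae_decoder (the stage 1 decoder) callables; it is not OpenAI's implementation, and the sizes below simply reflect the paper's setup of 256 BPE text tokens, a 32x32 grid of image tokens, and an 8192-entry visual codebook.

import torch

TEXT_LEN, IMAGE_LEN = 256, 1024   # caption tokens + 32x32 grid of image tokens
CODEBOOK_SIZE = 8192              # size of the discrete visual codebook

@torch.no_grad()
def generate_image(text_tokens, transformer, dvae_decoder, temperature=1.0):
    # Stage 2: autoregressively sample 1024 image tokens conditioned on the caption.
    seq = text_tokens.clone()
    for _ in range(IMAGE_LEN):
        logits = transformer(seq)[:, -1, :CODEBOOK_SIZE]   # next-token logits (hypothetical model)
        probs = torch.softmax(logits / temperature, dim=-1)
        next_tok = torch.multinomial(probs, num_samples=1)
        seq = torch.cat([seq, next_tok], dim=1)
    # Stage 1 in reverse: map the sampled codebook indices back to pixels.
    image_tokens = seq[:, -IMAGE_LEN:]
    return dvae_decoder(image_tokens.view(-1, 32, 32))

In practice, many such samples are drawn per caption and re-ranked with CLIP; that re-ranking is what the "automatic filtering" mentioned above and in the syllabus refers to.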
Syllabus
What is DALL-E?
VQ-VAE blur problems
transformers, transformers, transformers!
Stage 1 and Stage 2 explained
Stage 1 VQ-VAE recap
Stage 2 autoregressive transformer
Some notes on ELBO
VQ-VAE modifications
Stage 2 in-depth
Results
Engineering, engineering, engineering
Automatic filtering via CLIP
More results
Additional image-to-image translation examples
Taught by
Aleksa Gordić - The AI Epiphany
Related Courses
Deep Learning – Part 2 (Indian Institute of Technology Madras via Swayam)
Probabilistic Deep Learning with TensorFlow 2 (Imperial College London via Coursera)
Introduction to Deep Learning (Massachusetts Institute of Technology via YouTube)
Spatial Computational Thinking (National University of Singapore via edX)
MIT EI Seminar - Phillip Isola - Emergent Intelligence - Getting More Out of Agents Than You Bake In (Massachusetts Institute of Technology via YouTube)