OpenAI GLIDE - Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models
Offered By: Aleksa Gordić - The AI Epiphany via YouTube
Course Description
Overview
Explore OpenAI's GLIDE model for photorealistic image generation and editing in this comprehensive video lecture. Delve into the combination of diffusion models and transformers that outperforms the older DALL-E model. Learn about diffusion models in depth, including the VAE-inspired loss, guided diffusion, and classifier-free guidance. Examine the GLIDE pipeline, CLIP guidance, and comparisons with other models. Gain insights into inpainting techniques, safety considerations, and potential failure cases. Access additional resources, including research papers and blog posts, to further your understanding of diffusion models and their applications in AI-powered image generation.
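The classifier-free guidance covered in the lecture can be summarized in one line: the model produces both a text-conditioned and an unconditional noise prediction, and the final prediction extrapolates the conditional one away from the unconditional one. A minimal sketch (NumPy stand-ins for the model outputs; the function name and toy values are illustrative, not from GLIDE's codebase):

```python
import numpy as np

def classifier_free_guidance(eps_cond, eps_uncond, scale):
    """Blend the two noise predictions used in classifier-free guidance.

    eps = eps_uncond + scale * (eps_cond - eps_uncond)
    A scale > 1 pushes samples toward the text condition, trading
    diversity for fidelity; scale == 1 recovers plain conditional sampling.
    """
    return eps_uncond + scale * (eps_cond - eps_uncond)

# Toy stand-in predictions (a real model would output these per pixel).
eps_uncond = np.zeros(4)
eps_cond = np.ones(4)
print(classifier_free_guidance(eps_cond, eps_uncond, 3.0))  # [3. 3. 3. 3.]
```

With `scale == 1` the guided prediction equals the conditional one, which is why the scale is the single knob GLIDE tunes for text adherence.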
Syllabus
Intro to GLIDE - results
Intro to diffusion models
Inpainting and other awesome results
Diffusion models in depth
VAE inspired loss
GLIDE pipeline: diffusion + transformers
Guided diffusion
Classifier-free guidance
CLIP guidance
Comparison with other models
Safety considerations
Failure cases
Outro
Taught by
Aleksa Gordić - The AI Epiphany
Related Courses
Natural Language Processing
Columbia University via Coursera
Natural Language Processing
Stanford University via Coursera
Introduction to Natural Language Processing
University of Michigan via Coursera
moocTLH: Nuevos retos en las tecnologías del lenguaje humano (New Challenges in Human Language Technologies)
Universidad de Alicante via Miríadax
Natural Language Processing
Indian Institute of Technology, Kharagpur via Swayam