OpenAI GLIDE - Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models
Offered By: Aleksa Gordić - The AI Epiphany via YouTube
Course Description
Overview
Explore OpenAI's GLIDE model for photorealistic image generation and editing in this comprehensive video lecture. Delve into how the combination of diffusion models and transformers outperforms the earlier DALL-E model. Learn about diffusion models in depth, including the VAE-inspired loss, guided diffusion, and classifier-free guidance. Examine the GLIDE pipeline, CLIP guidance, and comparisons with other models. Gain insights into inpainting techniques, safety considerations, and potential failure cases. Access additional resources, including research papers and blog posts, to further your understanding of diffusion models and their applications in AI-powered image generation.
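One of the key techniques covered in the lecture, classifier-free guidance, can be sketched in a few lines. The idea is to run the diffusion model twice at each denoising step, once with the text conditioning and once without, and extrapolate from the unconditional noise prediction toward the conditional one. This is a minimal illustrative sketch with toy arrays standing in for the model's noise predictions; the function name and values are hypothetical, not from GLIDE's codebase.

```python
import numpy as np

def classifier_free_guidance(eps_uncond, eps_cond, guidance_scale):
    """Combine unconditional and conditional noise predictions.

    guidance_scale = 1.0 recovers the plain conditional prediction;
    larger values push samples harder toward the text prompt.
    """
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy stand-ins for a diffusion model's epsilon outputs at one step.
eps_u = np.array([0.1, -0.2])
eps_c = np.array([0.3, 0.0])
print(classifier_free_guidance(eps_u, eps_c, 3.0))  # extrapolated prediction
```

In GLIDE, the unconditional pass is obtained by feeding an empty text prompt, so a single model serves both roles and no separate classifier is needed.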
Syllabus
Intro to GLIDE - results
Intro to diffusion models
Inpainting and other awesome results
Diffusion models in depth
VAE-inspired loss
GLIDE pipeline: diffusion + transformers
Guided diffusion
Classifier-free guidance
CLIP guidance
Comparison with other models
Safety considerations
Failure cases
Outro
Taught by
Aleksa Gordić - The AI Epiphany
Related Courses
Diffusion Models Beat GANs on Image Synthesis - Machine Learning Research Paper Explained (Yannic Kilcher via YouTube)
Diffusion Models Beat GANs on Image Synthesis - ML Coding Series - Part 2 (Aleksa Gordić - The AI Epiphany via YouTube)
Food for Diffusion (HuggingFace via YouTube)
Imagen: Text-to-Image Generation Using Diffusion Models - Lecture 9 (University of Central Florida via YouTube)
Denoising Diffusion-Based Generative Modeling (Open Data Science via YouTube)