GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore the groundbreaking GLIDE model for text-to-image generation in this comprehensive video lecture. Delve into the mechanics of diffusion models and their application in creating photorealistic images from text descriptions. Learn about conditional generation techniques, guided diffusion, and the architecture behind GLIDE. Examine training methodologies, result metrics, and potential failure cases. Gain insights into safety considerations surrounding this powerful technology. Discover how GLIDE compares to other models like DALL-E and understand its implications for text-driven image editing and inpainting.
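The guided diffusion covered in the lecture centers on classifier-free guidance, where the model's text-conditioned noise prediction is extrapolated away from its unconditioned prediction at each sampling step. A minimal sketch of that mixing rule (function and variable names here are illustrative, not from the GLIDE codebase):

```python
import numpy as np

def classifier_free_guidance(eps_cond, eps_uncond, guidance_scale):
    """Combine conditional and unconditional noise predictions.

    Classifier-free guidance pushes the text-conditioned prediction
    eps_cond away from the unconditioned one eps_uncond by a scale s:
        eps = eps_uncond + s * (eps_cond - eps_uncond)
    A scale of 1.0 recovers plain conditional sampling; larger scales
    trade diversity for fidelity to the text prompt.
    """
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy placeholder predictions; a real model outputs noise estimates
# shaped like the image tensor.
eps_c = np.array([0.5, -0.2])
eps_u = np.array([0.1, 0.0])
print(classifier_free_guidance(eps_c, eps_u, 3.0))  # → [ 1.3 -0.6]
```

With scale 1.0 the output equals `eps_c` exactly; GLIDE's authors found larger guidance scales produce images that humans judge as more photorealistic and caption-faithful.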
Syllabus
- Intro & Overview
- What is a Diffusion Model?
- Conditional Generation and Guided Diffusion
- Architecture Recap
- Training & Result metrics
- Failure cases & my own results
- Safety considerations
Taught by
Yannic Kilcher
Related Courses
- Introduction to Artificial Intelligence (Stanford University via Udacity)
- Computer Vision: The Fundamentals (University of California, Berkeley via Coursera)
- Computational Photography (Georgia Institute of Technology via Coursera)
- Einführung in Computer Vision (Technische Universität München (Technical University of Munich) via Coursera)
- Introduction to Computer Vision (Georgia Institute of Technology via Udacity)