Text to Image AI Models - Different Methodologies and How It Works
Offered By: Prodramp via YouTube
Course Description
Overview
Explore various text-to-image generation AI methodologies and their inner workings in this 18-minute video tutorial. Learn about four different methods: autoregressive models, GANs, VQ-VAE transformers, and diffusion models. Discover how each approach works, from an introduction to GANs, through VQ-VAE-based systems such as DALL-E mini/mega and ruDALL-E, to the technology behind diffusion models. Examine specific implementations including GLIDE, DALL-E 2, and Google's Imagen. Gain insights into Google Pathway Models and access GitHub resources for further exploration. Understand the evolution of text-to-image AI, from early successes to advanced systems like DALL-E 2 and Google Imagen, which demonstrate impressive capabilities in generating images from text descriptions.
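To make the diffusion approach highlighted above (GLIDE, DALL-E 2, Imagen) more concrete, here is a minimal conceptual sketch of the DDPM-style core idea: corrupt an image with Gaussian noise according to a fixed schedule and train a network to predict that noise. This code is not from the video; the toy network, image size, and schedule values are illustrative assumptions, and real systems use large U-Nets conditioned on the timestep and a text embedding.

```python
import torch
import torch.nn as nn

# Linear beta (noise) schedule as in DDPM; the exact values are illustrative.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # cumulative product \bar{alpha}_t

def add_noise(x0, t, eps):
    """Forward process q(x_t | x_0): scale the image down and mix in noise."""
    a_bar = alpha_bars[t].view(-1, 1, 1, 1)
    return a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps

# Toy noise-prediction network (timestep and text conditioning omitted for brevity).
model = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.SiLU(),
                      nn.Conv2d(32, 3, 3, padding=1))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a random stand-in batch of "images" in [0, 1].
x0 = torch.rand(8, 3, 32, 32)
t = torch.randint(0, T, (8,))        # a random timestep per sample
eps = torch.randn_like(x0)           # the noise the network must predict
opt.zero_grad()
loss = ((model(add_noise(x0, t, eps)) - eps) ** 2).mean()
loss.backward()
opt.step()
```

Generation then runs this in reverse: starting from pure noise, the trained network's noise estimate is subtracted step by step until an image emerges, with the text prompt guiding each denoising step in systems like GLIDE, DALL-E 2, and Imagen.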
Syllabus
- Content Intro
- 4 Different Methods
- Our Objective
- Text to Image Generation Methods
- Autoregressive Models
- GANs
- GANs Introduction (see the GAN sketch after this syllabus)
- VQ-VAE Transformers
- VQ-VAE - DALL-E mini/mega Models
- VQ-VAE - ruDALL-E Models
- Diffusion Models
- Diffusion Models Technology
- Diffusion Models - GLIDE by OpenAI
- Diffusion Models - DALL-E 2 by OpenAI
- Diffusion Models - Imagen by Google
- Google Pathway Models
- GitHub Resources
- Conclusion
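As a companion to the GANs chapters in the syllabus above, the following is a minimal sketch of the adversarial setup: a generator maps random noise to fake images while a discriminator learns to tell real from fake. It is illustrative only and not code from the video; the tiny networks, image size, and hyperparameters are assumptions, and text-to-image GANs additionally condition both networks on a text embedding.

```python
import torch
import torch.nn as nn

# Tiny generator: 64-dim noise vector -> flattened 3x32x32 "image" (illustrative sizes).
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                  nn.Linear(256, 3 * 32 * 32), nn.Tanh())
# Tiny discriminator: flattened image -> real/fake logit.
D = nn.Sequential(nn.Linear(3 * 32 * 32, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(16, 3 * 32 * 32) * 2 - 1   # stand-in for a batch of real images
z = torch.randn(16, 64)                      # random noise input

# Discriminator step: real images labelled 1, generated images labelled 0.
fake = G(z).detach()
d_loss = bce(D(real), torch.ones(16, 1)) + bce(D(fake), torch.zeros(16, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: try to make the discriminator call the fakes "real".
g_loss = bce(D(G(z)), torch.ones(16, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```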
Taught by
Prodramp
Related Courses
- 6.S191: Introduction to Deep Learning (Massachusetts Institute of Technology via Independent)
- Generate Synthetic Images with DCGANs in Keras (Coursera Project Network via Coursera)
- Image Compression and Generation using Variational Autoencoders in Python (Coursera Project Network via Coursera)
- Build Basic Generative Adversarial Networks (GANs) (DeepLearning.AI via Coursera)
- Apply Generative Adversarial Networks (GANs) (DeepLearning.AI via Coursera)