Image GPT: Generative Pretraining from Pixels - Paper Explained
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore a comprehensive video analysis of the paper "Generative Pretraining from Pixels" by OpenAI researchers. Delve into how generative-model principles from natural language processing are applied to images. Learn about the approach of using a sequence Transformer to predict pixels auto-regressively, without any built-in knowledge of the 2D structure of the input. Discover how this method, trained on low-resolution ImageNet images without labels, learns strong image representations. Examine the model's performance in linear probing, fine-tuning, and low-data classification tasks, including its competitive accuracy on CIFAR-10 and ImageNet benchmarks. Follow the detailed breakdown of the model architecture, the experimental results, and their implications for computer vision and unsupervised learning.
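The core idea the video walks through can be sketched without any 2D inductive bias: the image is flattened in raster order into a 1D pixel sequence, and an autoregressive model is trained to predict each pixel from the ones before it. The minimal stdlib-only sketch below uses a smoothed bigram count model purely as a stand-in for the paper's sequence Transformer; all names here are illustrative, not from the paper's code.

```python
from collections import defaultdict

def flatten_raster(image):
    """Flatten a 2D image into a 1D pixel sequence in raster order.
    The model never sees 2D structure -- only this sequence."""
    return [p for row in image for p in row]

class BigramPixelModel:
    """Toy autoregressive model: estimates P(pixel_t | pixel_{t-1})
    from transition counts. A stand-in for the real Transformer."""
    def __init__(self, num_values=4):
        self.num_values = num_values
        # Add-one smoothing so unseen transitions get nonzero probability.
        self.counts = defaultdict(lambda: [1] * num_values)

    def train(self, seq):
        # Each pixel is predicted from its predecessor in the sequence.
        for prev, cur in zip(seq, seq[1:]):
            self.counts[prev][cur] += 1

    def prob(self, prev, cur):
        row = self.counts[prev]
        return row[cur] / sum(row)

# Tiny 4-level "image" standing in for low-resolution ImageNet data.
image = [[0, 1, 2],
         [1, 2, 3],
         [2, 3, 0]]
seq = flatten_raster(image)
model = BigramPixelModel(num_values=4)
model.train(seq)
# The transition 1 -> 2 occurs twice in this image, so it should be
# more probable than the unseen transition 1 -> 0.
print(model.prob(1, 2) > model.prob(1, 0))  # prints True
```

In the paper, the bigram model is replaced by a Transformer conditioned on the entire pixel prefix, and the learned hidden states are what the linear-probe and fine-tuning experiments later evaluate.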
Syllabus
- Intro & Overview
- Generative Models for Pretraining
- Pretraining for Visual Tasks
- Model Architecture
- Linear Probe Experiments
- Fine-Tuning Experiments
- Conclusion & Comments
Taught by
Yannic Kilcher
Related Courses
Introduction to Computational Arts: Processing - State University of New York via Coursera
Generative Art and Computational Creativity - Simon Fraser University via Kadenze
Advanced Generative Art and Computational Creativity - Simon Fraser University via Kadenze
Programming Graphics I: Introduction to Generative Art - Skillshare