VirTex: Learning Visual Representations from Textual Annotations
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore a detailed explanation of the VirTex paper, which introduces a novel approach to visual transfer learning using textual annotations. Dive into the methodology of pre-training convolutional neural networks from scratch on high-quality image captions, and see how this technique compares to traditional supervised and unsupervised pre-training. Learn about the quality-quantity tradeoff in visual representation learning, the image captioning task, and the implementation of the VirTex method. Examine the results of linear classification, ablation studies, fine-tuning experiments, and attention visualization. Gain insight into how this approach matches or exceeds ImageNet-based pre-training while using significantly fewer images, offering a more data-efficient route to visual transfer learning across a range of computer vision tasks.
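To make the idea concrete, here is a minimal PyTorch-style sketch of the setup described above: a ResNet-50 backbone trained from scratch through a caption-prediction head, after which the backbone alone is reused with a frozen linear classifier. This is a simplified illustration, not the paper's exact configuration; the class name VirTexSketch, the single forward-direction decoder (the paper captions in both directions), and all layer sizes are assumptions made for brevity.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class VirTexSketch(nn.Module):
    """Illustrative sketch: a visual backbone learned from scratch by
    predicting captions; only the backbone is transferred downstream."""

    def __init__(self, vocab_size=10000, d_model=512, num_layers=1, nhead=8):
        super().__init__()
        # Visual backbone, randomly initialized (no ImageNet weights).
        backbone = resnet50(weights=None)
        self.visual = nn.Sequential(*list(backbone.children())[:-2])  # keep 7x7 spatial grid
        self.project = nn.Conv2d(2048, d_model, kernel_size=1)

        # Textual head: token embeddings plus a small Transformer decoder that
        # attends over image features and predicts the next caption token.
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, images, caption_tokens):
        # images: (B, 3, 224, 224); caption_tokens: (B, T) integer token ids
        feats = self.project(self.visual(images))           # (B, d_model, 7, 7)
        memory = feats.flatten(2).transpose(1, 2)           # (B, 49, d_model)
        tgt = self.embed(caption_tokens)                    # (B, T, d_model)
        causal = nn.Transformer.generate_square_subsequent_mask(caption_tokens.size(1))
        hidden = self.decoder(tgt, memory, tgt_mask=causal) # (B, T, d_model)
        return self.lm_head(hidden)                         # (B, T, vocab_size)

# Pre-training objective: next-token prediction over the caption.
model = VirTexSketch()
images = torch.randn(2, 3, 224, 224)
tokens = torch.randint(0, 10000, (2, 12))
logits = model(images, tokens[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1))

# Transfer: freeze the visual backbone and train only a linear classifier on top,
# matching the "linear classification" evaluation listed in the syllabus.
for p in model.visual.parameters():
    p.requires_grad_(False)
linear_probe = nn.Linear(2048, 1000)                 # e.g. 1000 downstream classes
pooled = model.visual(images).mean(dim=(2, 3))       # global average pool -> (B, 2048)
class_logits = linear_probe(pooled)
```

In this sketch the captioning head exists only to provide a training signal; after pre-training it is discarded and the frozen backbone's pooled features are evaluated with a linear probe or fine-tuned on downstream tasks.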
Syllabus
- Intro & Overview
- Pre-Training for Visual Tasks
- Quality-Quantity Tradeoff
- Image Captioning
- VirTex Method
- Linear Classification
- Ablations
- Fine-Tuning
- Attention Visualization
- Conclusion & Remarks
Taught by
Yannic Kilcher
Related Courses
- Deep Learning For Visual Computing (Indian Institute of Technology, Kharagpur via Swayam)
- Literacy Essentials: Core Concepts Generative Adversarial Network (Pluralsight)
- Machine Learning & Deep Learning Projects (The AI University via YouTube)
- Implement Image Captioning with Recurrent Neural Networks (Pluralsight)
- Tensor2Tensor - TensorFlow at O'Reilly AI Conference, San Francisco '18 (TensorFlow via YouTube)