Image Captioning Python App with ViT and GPT2 Using Hugging Face Models - Applied Deep Learning

Offered By: 1littlecoder via YouTube

Tags

Image Captioning Courses, Deep Learning Courses, Python Courses, GPT-2 Courses, Model Deployment Courses, Gradio Courses, Hugging Face Courses, Vision Transformers Courses

Course Description

Overview

Learn to create an image captioning Python application using Vision Transformer (ViT) and GPT-2 models from Hugging Face. Follow along as the tutorial guides you through building a Gradio app that generates descriptive captions for images. Explore the integration of Sachin's pre-trained model from the Hugging Face Model Hub, which pairs a ViT encoder for image understanding with a GPT-2 decoder for text generation. By the end of this 25-minute tutorial, you will have deployed your own image captioning app to Hugging Face Spaces, gaining practical experience in applied deep learning and natural language processing.
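The full walkthrough is in the video; the sketch below only illustrates the general shape of such an app. The checkpoint name ("nlpconnect/vit-gpt2-image-captioning") and the caption() helper are illustrative assumptions standing in for the ViT + GPT-2 checkpoint used in the tutorial, wired into a minimal Gradio interface.

```python
# Minimal sketch of a ViT + GPT-2 image captioning Gradio app.
# The checkpoint below is an assumed stand-in for the model used in the video.
import gradio as gr
from PIL import Image
from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer

MODEL_ID = "nlpconnect/vit-gpt2-image-captioning"  # assumed checkpoint name

model = VisionEncoderDecoderModel.from_pretrained(MODEL_ID)
processor = ViTImageProcessor.from_pretrained(MODEL_ID)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

def caption(image: Image.Image) -> str:
    # Encode the image with the ViT image processor, then let the
    # GPT-2 decoder generate a caption with beam search.
    pixel_values = processor(images=image, return_tensors="pt").pixel_values
    output_ids = model.generate(pixel_values, max_length=16, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

demo = gr.Interface(
    fn=caption,
    inputs=gr.Image(type="pil"),
    outputs="text",
    title="Image Captioning with ViT + GPT-2",
)

if __name__ == "__main__":
    demo.launch()
```

Pushing a script like this (with a requirements.txt listing transformers, torch, and gradio) to a Hugging Face Space is the usual way to host the finished app.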

Syllabus

Build Image Captioning Python App with ViT & GPT2 using Hugging Face Models | Applied Deep Learning


Taught by

1littlecoder

Related Courses

Vision Transformers Explained + Fine-Tuning in Python
James Briggs via YouTube
ConvNeXt - A ConvNet for the 2020s - Paper Explained
Aleksa Gordić - The AI Epiphany via YouTube
Do Vision Transformers See Like Convolutional Neural Networks - Paper Explained
Aleksa Gordić - The AI Epiphany via YouTube
Stable Diffusion and Friends - High-Resolution Image Synthesis via Two-Stage Generative Models
HuggingFace via YouTube
Intro to Dense Vectors for NLP and Vision
James Briggs via YouTube