Fine-Tuning Vision Transformer for Diabetic Retinopathy Detection - Part 2
Offered By: The Machine Learning Engineer via YouTube
Course Description
Overview
Learn how to fine-tune a Vision Transformer (ViT) on a custom dataset in this 52-minute video tutorial, part of a 4-video series. Explore the process of taking a pre-trained model from Google, originally trained on the ImageNet-21k dataset, and fine-tuning it on the EyeQ Dataset for Diabetic Retinopathy (DR) detection. The EyeQ Dataset is a subset of the EyePACS Dataset used in the Diabetic Retinopathy Detection Kaggle Competition. Accompanying notebooks are available on GitHub so you can follow along and implement the techniques demonstrated in the video.
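For readers who want a feel for the workflow before opening the notebooks, the sketch below shows one common way to fine-tune a ViT image classifier with the Hugging Face Transformers library. The checkpoint name (google/vit-base-patch16-224-in21k), the local "EyeQ" folder layout, and all hyperparameters are illustrative assumptions, not values taken from the video.

```python
# Minimal sketch of fine-tuning a ViT classifier; paths and settings are assumptions.
import torch
from datasets import load_dataset
from transformers import (ViTForImageClassification, ViTImageProcessor,
                          TrainingArguments, Trainer)

checkpoint = "google/vit-base-patch16-224-in21k"  # assumed ImageNet-21k ViT checkpoint
processor = ViTImageProcessor.from_pretrained(checkpoint)

# Assumes the EyeQ fundus images are arranged in one folder per DR grade.
dataset = load_dataset("imagefolder", data_dir="EyeQ")  # hypothetical local path
labels = dataset["train"].features["label"].names

def transform(batch):
    # Resize and normalize the fundus images to the ViT input format.
    inputs = processor([img.convert("RGB") for img in batch["image"]],
                       return_tensors="pt")
    inputs["labels"] = batch["label"]
    return inputs

encoded = dataset.with_transform(transform)

model = ViTForImageClassification.from_pretrained(
    checkpoint,
    num_labels=len(labels),
    id2label={i: l for i, l in enumerate(labels)},
    label2id={l: i for i, l in enumerate(labels)},
)

def collate_fn(batch):
    # Stack per-example tensors into a training batch.
    return {
        "pixel_values": torch.stack([x["pixel_values"] for x in batch]),
        "labels": torch.tensor([x["labels"] for x in batch]),
    }

args = TrainingArguments(
    output_dir="vit-dr-eyeq",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    learning_rate=2e-5,
    remove_unused_columns=False,  # keep the raw image column for the transform
)

trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"],
                  data_collator=collate_fn)
trainer.train()
```

This follows the standard Trainer-based image-classification pattern; the notebooks in the series may structure the training loop, data splits, and evaluation differently.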
Syllabus
LLMOPS: Fine Tune ViT classifier with retina Images. Detection Model #machinelearning #datascience
Taught by
The Machine Learning Engineer
Related Courses
Advanced PyTorch Techniques and Applications (Packt via Coursera)
Preprocessing Unstructured Data for LLM Applications (DeepLearning.AI via Coursera)
Automatic Image Captioning with Vision Transformer and GPT-2 (Eran Feit via YouTube)
Tutorial on Vision Transformers - Tutorial 3 (MICDE University of Michigan via YouTube)
Image Captioning Python App with ViT and GPT2 Using Hugging Face Models - Applied Deep Learning (1littlecoder via YouTube)