Fine-Tuning Vision Transformer for Diabetic Retinopathy Detection - Part 2
Offered By: The Machine Learning Engineer via YouTube
Course Description
Overview
Learn how to fine-tune a Vision Transformer (ViT) with a custom dataset in this 52-minute video tutorial, part of a 4-video series. Explore the process of taking a pre-trained model from Google, initially trained on the ImageNet-21k dataset, and fine-tuning it with the EyeQ Dataset for Diabetic Retinopathy (DR) detection. Discover how to leverage the EyeQ Dataset, a subset of the EyePACS Dataset originally used in the Diabetic Retinopathy Detection Kaggle competition. Access accompanying notebooks on GitHub to follow along and implement the techniques demonstrated in the video.
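The overall workflow (load a Google ImageNet-21k ViT checkpoint, replace the classification head with one sized for the DR severity grades, then run supervised fine-tuning) can be sketched as below. This is a minimal illustration, not the notebook's actual code: the tiny `ViTConfig` and the dummy batch are stand-ins so the snippet runs offline, and the real tutorial would load the pretrained checkpoint shown in the comment and feed preprocessed EyeQ fundus images instead.

```python
import torch
from transformers import ViTConfig, ViTForImageClassification

# In the tutorial, the starting point is Google's ImageNet-21k checkpoint:
#   model = ViTForImageClassification.from_pretrained(
#       "google/vit-base-patch16-224-in21k", num_labels=5)
# Here we build a tiny, randomly initialised ViT instead so the sketch
# runs offline; the fine-tuning step itself is identical either way.
config = ViTConfig(
    image_size=32, patch_size=8, hidden_size=64,
    num_hidden_layers=2, num_attention_heads=4,
    intermediate_size=128,
    num_labels=5,  # DR severity grades 0-4 in the Kaggle competition
)
model = ViTForImageClassification(config)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Dummy batch standing in for preprocessed EyeQ fundus images.
pixel_values = torch.randn(2, 3, 32, 32)
labels = torch.tensor([0, 3])  # example DR grades

model.train()
outputs = model(pixel_values=pixel_values, labels=labels)
outputs.loss.backward()  # one fine-tuning step: cross-entropy on the labels
optimizer.step()
optimizer.zero_grad()

print(outputs.logits.shape)  # one logit per DR grade, per image
```

In practice the video series wraps this step in a full training loop (or the HuggingFace `Trainer`) with an image processor resizing fundus photographs to the checkpoint's expected 224x224 input.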
Syllabus
LLMOPS: Fine Tune ViT classifier with retina Images. Detection Model #machinelearning #datascience
Taught by
The Machine Learning Engineer
Related Courses
Vision Transformers Explained + Fine-Tuning in Python (James Briggs via YouTube)
ConvNeXt - A ConvNet for the 2020s - Paper Explained (Aleksa Gordić - The AI Epiphany via YouTube)
Do Vision Transformers See Like Convolutional Neural Networks - Paper Explained (Aleksa Gordić - The AI Epiphany via YouTube)
Stable Diffusion and Friends - High-Resolution Image Synthesis via Two-Stage Generative Models (HuggingFace via YouTube)
Intro to Dense Vectors for NLP and Vision (James Briggs via YouTube)