Fine-Tuning ViT Classifier with Retina Images and Converting to ONNX - Video 3
Offered By: The Machine Learning Engineer via YouTube
Course Description
Overview
Learn how to fine-tune a Vision Transformer (ViT) classifier on retinal images and convert it to ONNX format in this 12-minute video tutorial. Explore the process of adapting a pre-trained Google ViT model, initially trained on the ImageNet-21k dataset, for Diabetic Retinopathy (DR) detection using the EyeQ dataset, a subset of the EyePacs dataset from the Kaggle Diabetic Retinopathy Detection competition. Follow along as the instructor demonstrates the fine-tuning process and then converts the resulting model to ONNX format, improving its portability and widening its deployment options. Access the accompanying Jupyter notebook on GitHub to practice the techniques covered in this third installment of a four-part series on machine learning and data science.
Syllabus
LLMOps: Fine-Tune ViT Classifier with Retina Images. Convert to ONNX #machinelearning #datascience
Taught by
The Machine Learning Engineer
Related Courses
Vision Transformers Explained + Fine-Tuning in Python (James Briggs via YouTube)
ConvNeXt: A ConvNet for the 2020s - Paper Explained (Aleksa Gordić - The AI Epiphany via YouTube)
Do Vision Transformers See Like Convolutional Neural Networks? - Paper Explained (Aleksa Gordić - The AI Epiphany via YouTube)
Stable Diffusion and Friends: High-Resolution Image Synthesis via Two-Stage Generative Models (HuggingFace via YouTube)
Intro to Dense Vectors for NLP and Vision (James Briggs via YouTube)