Fine-Tuning Vision Transformer Classifier for EyePacs Dataset Quality Model - Part 1
Offered By: The Machine Learning Engineer via YouTube
Course Description
Overview
Dive into the world of fine-tuning Vision Transformers (ViT) with custom datasets in this 56-minute video tutorial, the first in a series of four. Learn how to leverage a pre-trained model by Google, initially trained on the ImageNet 21k dataset, and fine-tune it using the EyeQ Dataset for quality assessment purposes. Explore the EyeQ Dataset, a subset of the EyePacs Dataset originally used in the Diabetic Retinopathy Detection Kaggle Competition. Follow along with practical demonstrations and access the accompanying notebooks on GitHub to enhance your understanding of machine learning techniques for image classification tasks.
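The workflow described above — taking Google's ViT checkpoint pre-trained on ImageNet-21k and attaching a new classification head for retinal image quality — can be sketched as follows. This is a minimal illustration, assuming the Hugging Face `transformers` library and the `google/vit-base-patch16-224-in21k` checkpoint; the three quality labels (Good / Usable / Reject) follow the EyeQ dataset's grading scheme, but the function name is illustrative.

```python
# Quality labels used by the EyeQ dataset (Good / Usable / Reject).
labels = ["Good", "Usable", "Reject"]
id2label = {i: name for i, name in enumerate(labels)}
label2id = {name: i for i, name in enumerate(labels)}

def build_quality_model(checkpoint="google/vit-base-patch16-224-in21k"):
    """Load the ImageNet-21k ViT and replace its head with a 3-class quality classifier."""
    # Imported here so the label setup above runs without transformers installed.
    from transformers import ViTForImageClassification

    return ViTForImageClassification.from_pretrained(
        checkpoint,
        num_labels=len(labels),   # new head: 3 outputs instead of 21k classes
        id2label=id2label,
        label2id=label2id,
    )
```

Passing `id2label`/`label2id` to `from_pretrained` stores the mapping in the model config, so predictions decode directly to quality grades after fine-tuning.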
Syllabus
LLMOPS: Fine-Tune ViT Classifier on the EyePacs Dataset. Create and Fine-Tune a Quality Model #machinelearning
Taught by
The Machine Learning Engineer
Related Courses
Amazon SageMaker JumpStart Foundations (Japanese) — Amazon Web Services via AWS Skill Builder
AWS Flash - Generative AI with Diffusion Models — Amazon Web Services via AWS Skill Builder
AWS Flash - Operationalize Generative AI Applications (FMOps/LLMOps) — Amazon Web Services via AWS Skill Builder
AWS SimuLearn: Automate Fine-Tuning of an LLM — Amazon Web Services via AWS Skill Builder
AWS SimuLearn: Fine-Tune a Base Model with RLHF — Amazon Web Services via AWS Skill Builder