LLMOps: Fine-Tuning Video Classifier (ViViT) with Custom Data
Offered By: The Machine Learning Engineer via YouTube
Course Description
Overview
Learn how to fine-tune a Video Vision Transformer (ViViT) on your own dataset in this comprehensive 44-minute tutorial. Explore how to take Google's pretrained model (google/vivit-b-16x2-kinetics400), originally trained on the Kinetics-400 dataset, and adapt it to classify videos from a different dataset. Gain hands-on experience applying LLMOps techniques to machine learning and data science workflows. Access the accompanying code repository on GitHub to follow along and build your skills in video classification with state-of-the-art transformer models.
Syllabus
LLMOps: Fine Tune Video Classifier (ViViT) with your own data #machinelearning #datascience
Taught by
The Machine Learning Engineer
Related Courses
TensorFlow: Working with NLP — LinkedIn Learning
Introduction to Video Editing - Video Editing Tutorials — Great Learning via YouTube
HuggingFace Crash Course - Sentiment Analysis, Model Hub, Fine Tuning — Python Engineer via YouTube
GPT3 and Finetuning the Core Objective Functions - A Deep Dive — David Shapiro ~ AI via YouTube
How to Build a Q&A AI in Python - Open-Domain Question-Answering — James Briggs via YouTube