LLMOps: Fine-Tuning Video Classifier (ViViT) with Custom Data
Offered By: The Machine Learning Engineer via YouTube
Course Description
Overview
Learn how to fine-tune a Video Vision Transformer (ViViT) on your own dataset in this comprehensive 44-minute tutorial. Explore the process of taking a pretrained Google model (google/vivit-b-16x2-kinetics400), originally trained on the Kinetics-400 dataset, and adapting it to classify videos from a different dataset. Gain hands-on experience applying LLMOps techniques to machine learning and data science workflows. Access the accompanying code repository on GitHub to follow along and strengthen your skills in video classification with state-of-the-art transformer models.
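As a rough orientation before watching, the sketch below shows one common way to load the pretrained checkpoint with the Hugging Face transformers library and swap its Kinetics-400 classification head for a custom label set. The label names, dataset objects, and hyperparameters are illustrative assumptions, not details taken from the video or its repository.

from transformers import (
    Trainer,
    TrainingArguments,
    VivitForVideoClassification,
    VivitImageProcessor,
)

MODEL_ID = "google/vivit-b-16x2-kinetics400"
labels = ["class_a", "class_b", "class_c"]  # hypothetical custom label set

processor = VivitImageProcessor.from_pretrained(MODEL_ID)
model = VivitForVideoClassification.from_pretrained(
    MODEL_ID,
    num_labels=len(labels),
    id2label={i: name for i, name in enumerate(labels)},
    label2id={name: i for i, name in enumerate(labels)},
    ignore_mismatched_sizes=True,  # drop the 400-class Kinetics head and initialize a new one
)

def preprocess(frames):
    # `frames` is a list of 32 RGB frames sampled from one video clip;
    # the processor resizes and normalizes them to shape (32, 3, 224, 224).
    return processor(frames, return_tensors="pt")["pixel_values"][0]

args = TrainingArguments(
    output_dir="vivit-finetuned",
    per_device_train_batch_size=2,
    learning_rate=5e-5,
    num_train_epochs=3,
    remove_unused_columns=False,
)

# train_ds and eval_ds are assumed to yield dicts with "pixel_values" and "labels":
# trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()

Because the new head has a different number of output classes than the Kinetics-400 checkpoint, ignore_mismatched_sizes=True lets the rest of the pretrained weights load while the classifier is reinitialized for the custom labels.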
Syllabus
LLMOps: Fine Tune Video Classifier (ViViT) with your own data #machinelearning #datascience
Taught by
The Machine Learning Engineer
Related Courses
Neural Networks for Machine Learning (University of Toronto via Coursera)
Machine Learning Techniques (機器學習技法) (National Taiwan University via Coursera)
Machine Learning Capstone: An Intelligent Application with Deep Learning (University of Washington via Coursera)
Applied Problems of Data Analysis (Прикладные задачи анализа данных) (Moscow Institute of Physics and Technology via Coursera)
Leading Ambitious Teaching and Learning (Microsoft via edX)