LLMOps: Fine-Tuning Video Classifier (ViViT) with Custom Data
Offered By: The Machine Learning Engineer via YouTube
Course Description
Overview
Learn how to fine-tune a Video Vision Transformer (ViViT) on your own dataset in this comprehensive 44-minute tutorial. Explore the process of leveraging a pretrained model by Google (google/vivit-b-16x2-kinetics400), initially trained on the Kinetics-400 dataset, and adapting it to classify videos from a different dataset. Gain hands-on experience applying LLMOps techniques to machine learning and data science workflows. Access the accompanying code repository on GitHub to follow along and strengthen your skills in video classification with state-of-the-art transformer models.
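The core step the tutorial covers is loading the pretrained checkpoint and swapping its Kinetics-400 classification head for one sized to your own labels. Below is a minimal sketch of that setup using the Hugging Face Transformers API; it is not the tutorial's exact code, and the label names are hypothetical placeholders for your own dataset's classes.

```python
# Minimal sketch: load the pretrained ViViT checkpoint and replace its
# 400-class Kinetics head with a new head for a custom label set.
from transformers import VivitImageProcessor, VivitForVideoClassification

checkpoint = "google/vivit-b-16x2-kinetics400"  # pretrained on Kinetics-400

# Hypothetical custom classes; replace with the labels of your own dataset.
labels = ["cooking", "dancing", "playing_guitar"]
label2id = {name: i for i, name in enumerate(labels)}
id2label = {i: name for name, i in label2id.items()}

processor = VivitImageProcessor.from_pretrained(checkpoint)
model = VivitForVideoClassification.from_pretrained(
    checkpoint,
    num_labels=len(labels),
    label2id=label2id,
    id2label=id2label,
    ignore_mismatched_sizes=True,  # discard the Kinetics-400 head weights
)
```

From here, the model can be fine-tuned with the standard Transformers `Trainer` (or a custom PyTorch loop) on video clips preprocessed with the `processor`, which is the workflow the tutorial walks through end to end.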
Syllabus
LLMOps: Fine Tune Video Classifier (ViViT) with your own data #machinelearning #datascience
Taught by
The Machine Learning Engineer
Related Courses
Introduction to Artificial Intelligence — Stanford University via Udacity
Natural Language Processing — Columbia University via Coursera
Probabilistic Graphical Models 1: Representation — Stanford University via Coursera
Computer Vision: The Fundamentals — University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) — California Institute of Technology via Independent