MLOps: OpenVINO Toolkit - Compress and Quantize YOLO Model
Offered By: The Machine Learning Engineer via YouTube
Course Description
Overview
Learn how to convert a YOLOv10 model to OpenVINO IR format and quantize it to INT8 using the NNCF library from OpenVINO. Explore the process of model compression and quantization for improved performance and efficiency. Follow along with practical examples of CPU inference using both the YOLOv10 and YOLOv8 versions. Gain hands-on experience with MLOps techniques for optimizing deep learning models, with a specific focus on YOLO architectures. Access the accompanying notebook on GitHub to practice and implement the demonstrated techniques in your own projects.
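The full walkthrough is in the linked notebook; the snippet below is only a minimal sketch of the workflow the course describes, assuming the ultralytics package is used for the IR export and using random tensors as stand-in calibration data. The weights file, output paths, and 640x640 input size are placeholders, not values taken from the course.

```python
import numpy as np
import openvino as ov
import nncf
from ultralytics import YOLO

# 1. Export the PyTorch YOLOv10 checkpoint to OpenVINO IR (.xml/.bin).
#    "yolov10n.pt" and the output folder name are placeholders.
YOLO("yolov10n.pt").export(format="openvino")

core = ov.Core()
ov_model = core.read_model("yolov10n_openvino_model/yolov10n.xml")

# 2. Calibration data for NNCF post-training quantization. Random tensors
#    stand in here for real preprocessed images (assumption).
calib_items = [np.random.rand(1, 3, 640, 640).astype(np.float32) for _ in range(100)]
calib_dataset = nncf.Dataset(calib_items)

# 3. Quantize weights and activations to INT8 and save the compressed IR.
int8_model = nncf.quantize(ov_model, calib_dataset)
ov.save_model(int8_model, "yolov10n_int8.xml")

# 4. Compile and run the INT8 model on CPU.
compiled = core.compile_model(int8_model, "CPU")
output = compiled(np.random.rand(1, 3, 640, 640).astype(np.float32))
```

In practice the calibration set should be a few hundred representative images passed through the same preprocessing as at inference time, since the quantization ranges are estimated from that data.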
Syllabus
MLOps: OpenVino Toolkit Compress and Quantize YOLO Model #datascience #machinelearning
Taught by
The Machine Learning Engineer
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Computational Photography - Georgia Institute of Technology via Coursera
Einführung in Computer Vision - Technische Universität München (Technical University of Munich) via Coursera
Introduction to Computer Vision - Georgia Institute of Technology via Udacity