CPU Inference with ViT ONNX Model in Azure ML Managed Endpoint - AKS
Offered By: The Machine Learning Engineer via YouTube
Course Description
Overview
Learn how to perform CPU inference on Azure Kubernetes Service (AKS) by creating a Managed Endpoint in Azure Machine Learning Studio. Explore the process of converting a Vision Transformer (ViT) model to ONNX format and serving it with ONNX Runtime through the Azure ML Python SDK v2. This 49-minute video tutorial walks through setting up and deploying a machine learning model for efficient inference in a cloud environment, demonstrating essential MLOps practices for data scientists and machine learning engineers.
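The overview describes serving a ViT model exported to ONNX with ONNX Runtime behind an Azure ML managed endpoint. A minimal sketch of the scoring script such a deployment typically uses follows; the model filename vit.onnx, the ONNX input name pixel_values, and the JSON payload shape are assumptions for illustration, not details taken from the video.

```python
import json
import os

import numpy as np


def softmax(logits):
    # Numerically stable softmax over the class axis.
    shifted = np.exp(logits - np.max(logits, axis=-1, keepdims=True))
    return shifted / shifted.sum(axis=-1, keepdims=True)


def init():
    # Azure ML mounts the registered model under AZUREML_MODEL_DIR.
    # "vit.onnx" is an assumed filename; use whatever name you registered.
    global session
    import onnxruntime as ort  # imported lazily so the module loads without it

    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "vit.onnx")
    # CPUExecutionProvider forces CPU inference, matching the video's setup.
    session = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])


def run(raw_data):
    # Assumes JSON like {"pixel_values": [...]} with shape (N, 3, 224, 224),
    # i.e. images already preprocessed on the client side.
    pixel_values = np.array(json.loads(raw_data)["pixel_values"], dtype=np.float32)
    logits = session.run(None, {"pixel_values": pixel_values})[0]
    probs = softmax(logits)
    return {"predictions": probs.argmax(axis=-1).tolist()}
```

Azure ML calls init() once when the deployment starts and run() per request, so the session is created once and reused across invocations.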
Syllabus
MLOPS: CPU Inference ViT ONNX Model in Azure ML Managed EndPoint (AKS) #machinelearning #datascience
Taught by
The Machine Learning Engineer
Related Courses
Applied Machine Learning (Microsoft via edX)
Delivering a Data Warehouse in the Cloud (Microsoft via edX)
Developing Big Data Solutions with Azure Machine Learning (Microsoft via edX)
DP-100: A-Z Machine Learning using Azure Machine Learning (Udemy)
Operationalizing Microsoft Azure AI Solutions (Pluralsight)