CPU Inference with ViT ONNX Model in Azure ML Managed Endpoint - AKS
Offered By: The Machine Learning Engineer via YouTube
Course Description
Overview
Learn how to perform CPU inference on Azure Kubernetes Service (AKS) by creating a Managed Endpoint in Azure Machine Learning Studio. Explore the process of converting a Vision Transformer (ViT) model to ONNX format and running it with onnxruntime via the Azure ML Python SDK v2. This 49-minute video tutorial walks through setting up and deploying a machine learning model for efficient inference in the cloud, demonstrating essential MLOps practices for data scientists and machine learning engineers.
Syllabus
MLOps: CPU Inference ViT ONNX Model in Azure ML Managed Endpoint (AKS) #machinelearning #datascience
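The managed-endpoint side of the workflow can be sketched with the Azure ML Python SDK v2 (azure-ai-ml). This is a configuration outline under assumed names: the endpoint name, resource identifiers, file paths, base image, and instance type are placeholders, not details from the video.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
    ManagedOnlineEndpoint,
    ManagedOnlineDeployment,
    Model,
    Environment,
    CodeConfiguration,
)
from azure.identity import DefaultAzureCredential

# Workspace handle; the three identifiers are placeholders.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Endpoint: a stable scoring URL with key-based auth.
endpoint = ManagedOnlineEndpoint(name="vit-onnx-endpoint", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# Deployment: the exported ONNX model plus a scoring script that
# loads it with onnxruntime and runs on a CPU instance type.
deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="vit-onnx-endpoint",
    model=Model(path="./model.onnx"),
    environment=Environment(
        conda_file="./env/conda.yml",  # assumed to pin onnxruntime and numpy
        image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
    ),
    code_configuration=CodeConfiguration(code="./src", scoring_script="score.py"),
    instance_type="Standard_DS3_v2",  # a CPU SKU
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```

Once the deployment is live, ml_client.online_endpoints.invoke can be used to send a request with image data to the scoring URL.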
Taught by
The Machine Learning Engineer
Related Courses
Caffe2: Getting Started - Pluralsight
Despliegue de modelos de IA en IoT Edge con ONNX - Coursera Project Network via Coursera
Flux - The Elegant Machine Learning Library for Julia - The Julia Programming Language via YouTube
How to Convert Almost Any PyTorch Model to ONNX and Serve It Using Flask - Abhishek Thakur via YouTube
Productionizing Machine Learning with Apache Spark, MLflow and ONNX - Cloud Deployment Using SQL Server - Databricks via YouTube