
Creating a Custom Serving Runtime in KServe ModelMesh - Hands-On Experience

Offered By: Linux Foundation via YouTube

Tags

Machine Learning Courses, Kubernetes Courses, Grafana Courses, Prometheus Courses, Model Deployment Courses, Containerization Courses, KServe Courses

Course Description

Overview

Explore the process of creating a custom serving runtime in KServe ModelMesh to serve machine learning models in this 30-minute conference talk. Gain insights into ModelMesh's key features, learn how to build a new container image supporting the desired frameworks, and understand the deployment strategy. Discover the advantages of the KServe and ModelMesh architecture, including monitoring capabilities with Prometheus and Grafana dashboards. Follow along with hands-on demonstrations of loading models into existing model servers and running predictions using custom serving runtimes. Delve into practical examples and step-by-step instructions for implementing ModelMesh in real-world scenarios.
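
The core of the talk is defining a custom ServingRuntime so ModelMesh can serve additional model formats. As a rough illustration of what that involves, the sketch below registers a hypothetical runtime using the Kubernetes Python client; the runtime name, container image, model-format name, ports, and namespace are assumptions for illustration, not values taken from the talk.

```python
# Minimal sketch: registering a custom ModelMesh ServingRuntime via the
# Kubernetes Python client. All names, the image, and the ports below are
# illustrative placeholders, not values from the talk.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

serving_runtime = {
    "apiVersion": "serving.kserve.io/v1alpha1",
    "kind": "ServingRuntime",
    "metadata": {"name": "my-custom-runtime"},           # hypothetical name
    "spec": {
        "supportedModelFormats": [
            {"name": "my-framework", "version": "1", "autoSelect": True}
        ],
        "multiModel": True,                               # ModelMesh-style multi-model runtime
        "grpcDataEndpoint": "port:8001",                  # inference gRPC port (assumed)
        "grpcEndpoint": "port:8085",                      # model-management gRPC port (assumed)
        "containers": [
            {
                "name": "my-server",
                # Custom container image built to support the desired framework
                "image": "example.registry.io/my-custom-server:latest",
                "resources": {
                    "requests": {"cpu": "500m", "memory": "1Gi"},
                    "limits": {"cpu": "2", "memory": "2Gi"},
                },
            }
        ],
    },
}

api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="serving.kserve.io",
    version="v1alpha1",
    namespace="modelmesh-serving",                        # assumed namespace
    plural="servingruntimes",
    body=serving_runtime,
)
```

Applying a manifest like this is equivalent to a `kubectl apply` of the corresponding YAML; ModelMesh uses the declared supported model formats to place models of that format onto the new runtime's pods.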

Syllabus

Introduction
Agenda
What is Model Serving
Deployment Strategy
KServe
Pod Per Model
ModelMesh
ModelMesh Features
ModelMesh Architecture
Monitoring
Prometheus
Grafana Dashboard
Model Loading
Serving Runtime
Why KServe
Step by Step
Example
ModelMesh Example
In Practice (see the prediction sketch after this syllabus)
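
The "Example" and "In Practice" chapters demonstrate running predictions against models loaded through a custom runtime. A minimal sketch of such a request over the KServe V2 REST inference protocol follows; the model name, input tensor, and port-forwarded address are illustrative assumptions, and the REST path assumes the ModelMesh REST proxy is enabled (otherwise inference goes over gRPC).

```python
# Minimal sketch: a V2 REST inference request to a model served by ModelMesh,
# e.g. after `kubectl port-forward svc/modelmesh-serving 8008:8008 -n modelmesh-serving`.
# Model name, tensor name, shape, and data are illustrative placeholders.
import requests

MODEL_NAME = "example-model"          # hypothetical model / InferenceService name
URL = f"http://localhost:8008/v2/models/{MODEL_NAME}/infer"

payload = {
    "inputs": [
        {
            "name": "input-0",        # input tensor name expected by the model
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [6.8, 2.8, 4.8, 1.4],
        }
    ]
}

resp = requests.post(URL, json=payload, timeout=10)
resp.raise_for_status()
print(resp.json()["outputs"])         # V2 protocol responses carry an "outputs" list
```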


Taught by

Linux Foundation


Related Courses

Developing a Tabular Data Model
Microsoft via edX
Data Science in Action - Building a Predictive Churn Model
SAP Learning
Serverless Machine Learning with Tensorflow on Google Cloud Platform 日本語版
Google Cloud via Coursera
Intro to TensorFlow em Português Brasileiro
Google Cloud via Coursera
Serverless Machine Learning con TensorFlow en GCP
Google Cloud via Coursera