ModelMesh: Scalable AI Model Serving on Kubernetes

Offered By: Linux Foundation via YouTube

Tags

Laravel Courses, Web Development Courses, Kubernetes Courses, Microservices Courses, Prometheus Courses, Scalability Courses, KServe Courses, Distributed Caching Courses

Course Description

Overview

Explore the scalable deployment of AI models on Kubernetes using ModelMesh, the multi-model serving backend for KServe, in this conference talk. Learn how to overcome resource limitations and efficiently manage numerous models at scale. Discover ModelMesh's distributed LRU cache for intelligent model loading and unloading, as well as its routing capabilities for balancing inference requests. Gain insights into the latest major release (v0.10) and its integration with KServe. Understand the advantages of ModelMesh's small control-plane footprint and its ability to host many models per pod, maximizing cluster resources and minimizing costs. Explore newly supported model runtimes such as TorchServe and the capability for runtime sharing across namespaces. Dive into the ModelMesh architecture, monitoring techniques using Prometheus, and practical examples of custom runtimes and inference services through a live demonstration.
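For context on the KServe integration the talk covers, a model served through ModelMesh is declared with the same InferenceService resource as standard KServe, with an annotation selecting the ModelMesh deployment mode instead of a dedicated pod per model. A minimal sketch (the name, model format, and storage URI below are placeholders, not from the talk):

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: example-sklearn-model          # placeholder name
  annotations:
    # Routes this model to the shared ModelMesh deployment
    # rather than a dedicated per-model predictor pod.
    serving.kserve.io/deploymentMode: ModelMesh
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn                  # must match a registered ServingRuntime
      storageUri: s3://example-bucket/models/example-model  # placeholder path
```

ModelMesh then loads and unloads the model across its runtime pods according to its distributed LRU cache, so many such InferenceServices can share a small, fixed pool of serving pods.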

Syllabus

Introduction
Outline
What is Model Serving
Deploying Models as Microservices
Project KServe
Pod Per Model Paradigm
ModelMesh
ModelMesh Architecture
ModelMesh Architecture Overview
Monitoring ModelMesh
Prometheus
ModelMesh Dashboard
Cache Miss Rate
Serving Runtimes
Example
Custom Runtime
Inference Service
Demo
Contact Information
Questions


Taught by

Linux Foundation


Related Courses

AIOps Essentials (Autoscaling Kubernetes with Prometheus Metrics)
A Cloud Guru
DevOps Monitoring Deep Dive
A Cloud Guru
Learn Docker by Doing
A Cloud Guru
LPI DevOps Tools Engineer Certification
A Cloud Guru
Monitoring Kubernetes With Prometheus
A Cloud Guru