YoVDO

Serverless Machine Learning Model Inference on Kubernetes with KServe

Offered By: Devoxx via YouTube

Tags

Devoxx Courses, Machine Learning Courses, Kubernetes Courses, Autoscaling Courses, KServe Courses

Course Description

Overview

Explore serverless machine learning model inference on Kubernetes using KServe in this 38-minute conference talk from Devoxx. Learn how to integrate popular ML frameworks for easy model inference and prototyping, leverage Knative Serving for cost-effective autoscaling, build complex ML pipelines using inference graphs, and implement effective monitoring and deployment strategies. The talk walks through several deployment scenarios in detail, offering practical guidance for running your own models on Kubernetes.
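As a rough illustration of the kind of deployment the talk covers (this example is not from the talk itself), KServe models are declared as `InferenceService` resources; the sketch below uses the sklearn iris model from the KServe examples bucket, with a Knative `minReplicas: 0` setting to allow scale-to-zero:

```yaml
# Minimal KServe InferenceService (sketch; storageUri is the public
# KServe example model, not a resource from this talk).
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris
spec:
  predictor:
    minReplicas: 0          # scale to zero when idle (Knative Serving)
    model:
      modelFormat:
        name: sklearn       # framework runtime selected by model format
      storageUri: gs://kfserving-examples/models/sklearn/1.0/model
```

Applied with `kubectl apply -f`, this creates a Knative-backed HTTP endpoint that autoscales with request load, including down to zero replicas when no traffic arrives.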

Syllabus

Serverless Machine Learning Model Inference on Kubernetes with KServe by Stavros Kontopoulos


Taught by

Devoxx

Related Courses

Machine Learning in Fastly's Compute@Edge
Linux Foundation via YouTube
ModelMesh: Scalable AI Model Serving on Kubernetes
Linux Foundation via YouTube
MLSecOps - Automated Online and Offline ML Model Evaluations on Kubernetes
Linux Foundation via YouTube
Creating a Custom Serving Runtime in KServe ModelMesh - Hands-On Experience
Linux Foundation via YouTube
Integrating High Performance Feature Stores with KServe Model Serving
Linux Foundation via YouTube