Serverless Machine Learning Model Inference on Kubernetes with KServe
Offered By: Devoxx via YouTube
Course Description
Overview
Explore serverless machine learning model inference on Kubernetes using KServe in this 38-minute conference talk from Devoxx. Learn how to integrate popular ML frameworks for easy model inference and prototyping, leverage Knative Serving for cost-effective autoscaling, build complex ML pipelines using inference graphs, and implement effective monitoring and deployment strategies. Gain practical insights into deploying your own models as various scenarios are described in detail, empowering you to take your ML deployments to the next level on Kubernetes.
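The deployment pattern the talk covers centers on KServe's InferenceService resource. As an illustrative sketch (the model name and storage URI here are placeholders, not examples from the talk), a scikit-learn model can be served with Knative-backed scale-to-zero like this:

```yaml
# Minimal KServe InferenceService; storageUri and name are illustrative.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris
spec:
  predictor:
    minReplicas: 0   # allow Knative Serving to scale the predictor to zero when idle
    sklearn:
      storageUri: gs://kfserving-examples/models/sklearn/1.0/model
```

Applying the manifest with `kubectl apply -f` creates a predictor that autoscales with request load, which is the cost-saving behavior the session discusses.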
Syllabus
Serverless Machine Learning Model Inference on Kubernetes with KServe by Stavros Kontopoulos
Taught by
Devoxx
Related Courses
Designing Highly Scalable Web Apps on Google Cloud Platform (Google via Coursera)
Elastic Google Cloud Infrastructure: Scaling and Automation (Google Cloud via Coursera)
Elastic Cloud Infrastructure: Scaling and Automation auf Deutsch (Google Cloud via Coursera)
Elastic Cloud Infrastructure: Scaling and Automation en Français (Google Cloud via Coursera)
Alibaba Cloud Native Solutions and Container Service (Alibaba via Coursera)