Productionizing Real-time Serving with MLflow
Offered By: Databricks via YouTube
Course Description
Overview
Explore how to bring MLflow serving to production grade in this 28-minute conference talk from Databricks. Learn how to deploy machine learning models as REST API endpoints and advance them into containerized production environments. Discover techniques for implementing custom middlewares, monitoring, logging, and performance optimization in high-scale applications. Gain insights into Yotpo's approach to making MLflow serving production-ready, covering continuous delivery, request transformation, metrics export, deployment strategies, and monitoring best practices. Finally, look at the optimizations and control mechanisms that keep MLflow serving performant in real-world, high-throughput scenarios.
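As context for the talk, the following is a minimal sketch (not taken from the presentation) of the baseline workflow it builds on: logging a toy scikit-learn model, serving it with the `mlflow models serve` CLI, and querying the resulting REST endpoint at `/invocations`. The model, port 5000, column names, and payload shape are illustrative assumptions; the request format shown targets MLflow 2.x.

```python
import mlflow
import mlflow.sklearn
import requests
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train and log a toy model so MLflow has something to serve.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=500).fit(X, y)
with mlflow.start_run() as run:
    mlflow.sklearn.log_model(model, "model")

# Serve it as a REST API endpoint (run in a shell, substituting the run id):
#   mlflow models serve -m runs:/<run_id>/model -p 5000

# Query the endpoint; the scoring server listens on POST /invocations.
# "dataframe_split" is the MLflow 2.x request format.
payload = {
    "dataframe_split": {
        "columns": ["sepal_length", "sepal_width", "petal_length", "petal_width"],
        "data": [[5.1, 3.5, 1.4, 0.2]],
    }
}
resp = requests.post("http://127.0.0.1:5000/invocations", json=payload)
print(resp.json())
```

The talk starts from this bare endpoint and discusses what it takes to harden it for production traffic.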
Syllabus
Introduction
Continuous Delivery
MLflow Serving
Request Transformation
Exporting Metrics
Deployment
Monitoring
Optimizations
Control
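Two of the syllabus items, Request Transformation and Exporting Metrics, typically involve placing middleware in front of the scoring server. The sketch below is a hedged illustration rather than Yotpo's actual implementation: a small Flask proxy (the `/predict` route, metric names, caller payload shape, and the local MLflow endpoint on port 5000 are all assumptions) that reshapes caller payloads into MLflow's input format and exposes Prometheus metrics for scraping.

```python
import time

import requests
from flask import Flask, jsonify, request
from prometheus_client import CONTENT_TYPE_LATEST, Counter, Histogram, generate_latest

MLFLOW_URL = "http://127.0.0.1:5000/invocations"  # assumed local MLflow scoring server

app = Flask(__name__)
REQUESTS = Counter("model_requests_total", "Prediction requests served")
LATENCY = Histogram("model_request_latency_seconds", "End-to-end prediction latency")

@app.route("/predict", methods=["POST"])
def predict():
    REQUESTS.inc()
    start = time.time()
    # Request transformation: assume callers send {"features": {name: value, ...}}
    # and map it to MLflow's dataframe_split input format.
    features = request.get_json()["features"]
    payload = {
        "dataframe_split": {
            "columns": list(features),
            "data": [list(features.values())],
        }
    }
    resp = requests.post(MLFLOW_URL, json=payload)
    LATENCY.observe(time.time() - start)
    return jsonify(resp.json())

@app.route("/metrics")
def metrics():
    # Prometheus scrapes this endpoint to collect the counters above.
    return generate_latest(), 200, {"Content-Type": CONTENT_TYPE_LATEST}

if __name__ == "__main__":
    app.run(port=8080)
```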
Taught by
Databricks
Related Courses
Fundamentals of Containers, Kubernetes, and Red Hat OpenShift (Red Hat via edX)
Configuration Management for Containerized Delivery (Microsoft via edX)
Getting Started with Google Kubernetes Engine - Español (Google Cloud via Coursera)
Getting Started with Google Kubernetes Engine - 日本語版 (Google Cloud via Coursera)
Architecting with Google Kubernetes Engine: Foundations en Español (Google Cloud via Coursera)