
Unifying Real-Time and Batch ML Inference Using BentoML and Apache Spark

Offered By: The ASF via YouTube

Tags

Machine Learning Courses, Python Courses, Apache Spark Courses, Distributed Computing Courses, Batch Processing Courses, Model Deployment Courses

Course Description

Overview

Discover how to unify real-time and batch machine learning inference using BentoML and Apache Spark in this 28-minute conference talk. Learn from Bo Jiang, a Product Engineer at BentoML, as he explores the integration of these powerful tools. Gain insights into packaging models with BentoML, deploying BentoServices to production, and invoking them from Spark for scalable batch inference. Understand how to leverage the same models for both real-time and batch predictions, ensuring consistency in inference logic across different workloads. Explore the run_in_spark API, which automatically distributes models and inference logic across Spark worker nodes. Discover how this unified approach eliminates concerns about divergence in inference logic, promotes version control, and maintains consistent library dependencies. Master the art of managing both real-time and batch inference under the same standards, ultimately fostering efficient AI service development and deployment.
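
As a rough sketch of the workflow the talk describes (not code taken from the talk itself), the two halves might look like the following. The service definition follows the standard BentoML 1.x pattern; the model tag "iris_clf", the service name "iris_classifier", and the S3 paths are illustrative assumptions, and the exact module path and signature of run_in_spark may vary between BentoML releases.

    # service.py -- packaging the model as a BentoML service
    # (standard BentoML 1.x pattern; model and service names are illustrative)
    import numpy as np
    import bentoml
    from bentoml.io import NumpyNdarray

    # assumes a model was saved earlier, e.g. bentoml.sklearn.save_model("iris_clf", clf)
    runner = bentoml.sklearn.get("iris_clf:latest").to_runner()
    svc = bentoml.Service("iris_classifier", runners=[runner])

    @svc.api(input=NumpyNdarray(), output=NumpyNdarray())
    def classify(features: np.ndarray) -> np.ndarray:
        # the same inference logic serves real-time requests and batch jobs
        return runner.predict.run(features)

    # batch_job.py -- invoking the same Bento from Spark for batch inference
    import bentoml
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    features_df = spark.read.parquet("s3://my-bucket/features/")   # illustrative path

    bento = bentoml.get("iris_classifier:latest")

    # run_in_spark ships the Bento (model, inference code, dependencies) to the
    # worker nodes and applies the chosen API to each partition of the DataFrame
    predictions_df = bentoml.batch.run_in_spark(
        bento=bento,
        df=features_df,
        spark=spark,
        api_name="classify",
    )
    predictions_df.write.parquet("s3://my-bucket/predictions/")    # illustrative path

Because the Spark job pulls the same packaged Bento that backs the real-time endpoint, the batch path reuses the identical inference code, model version, and pinned dependencies, which is the consistency guarantee the talk emphasizes.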

Syllabus

Unifying Real-Time and Batch ML Inference Using BentoML and Apache Spark


Taught by

The ASF

Related Courses

Cloud Computing Concepts, Part 1
University of Illinois at Urbana-Champaign via Coursera
Cloud Computing Concepts: Part 2
University of Illinois at Urbana-Champaign via Coursera
Reliable Distributed Algorithms - Part 1
KTH Royal Institute of Technology via edX
Introduction to Apache Spark and AWS
University of London International Programmes via Coursera
Réalisez des calculs distribués sur des données massives
CentraleSupélec via OpenClassrooms