How to Deploy ML Models in Production with BentoML

Offered By: Valerio Velardo - The Sound of AI via YouTube

Tags

Docker Courses, Cloud Computing Courses, Kubernetes Courses, Inference Courses, Model Training Courses

Course Description

Overview

Learn how to deploy machine learning models into production using BentoML in this comprehensive tutorial video. Explore the installation process for BentoML, save ML models to BentoML's local store, create a BentoML service, build and containerize a bento with Docker, and send requests to the service to receive inferences. Follow along as the instructor demonstrates training a simple ConvNet model on MNIST, saving a Keras model, and running a BentoML service via Docker. Gain insights into deployment options such as Kubernetes and cloud platforms, and access the accompanying code on GitHub for hands-on practice.

Syllabus

Intro
BentoML deployment steps
Installing BentoML and other requirements
Training a simple ConvNet model on MNIST
Saving Keras model to BentoML local store
Creating BentoML service
Sending requests to BentoML service
Creating a bento
Serving a model through a bento
Dockerise a bento
Run BentoML service via Docker
Deployment options: Kubernetes + Cloud
Outro
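The serving and deployment steps in the syllabus can be sketched as a command sequence. This assumes BentoML 1.x, a `service.py` that defines a service object named `svc` with a `predict` endpoint, and a bento named `mnist_classifier`; all of these names are placeholders rather than the ones used in the video.

```shell
# Serve the service locally for development (assumes service.py defines `svc`)
bentoml serve service:svc

# Build a bento from a bentofile.yaml in the current directory
bentoml build

# Containerize the built bento into a Docker image
# (the actual image tag is printed when containerization finishes)
bentoml containerize mnist_classifier:latest

# Run the containerized service, mapping BentoML's default port 3000
docker run -p 3000:3000 mnist_classifier:latest

# Send an inference request to the predict endpoint
curl -X POST -H "Content-Type: application/json" \
     --data @sample_input.json \
     http://localhost:3000/predict
```

From here, the same Docker image can be pushed to a registry and deployed to Kubernetes or a cloud platform, as discussed in the final part of the video.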


Taught by

Valerio Velardo - The Sound of AI

Related Courses

How Google does Machine Learning en Español
Google Cloud via Coursera
Creating Custom Callbacks in Keras
Coursera Project Network via Coursera
Automatic Machine Learning with H2O AutoML and Python
Coursera Project Network via Coursera
AI in Healthcare Capstone
Stanford University via Coursera
AutoML con Pycaret y TPOT
Coursera Project Network via Coursera