How to Deploy ML Models in Production with BentoML
Offered By: Valerio Velardo - The Sound of AI via YouTube
Course Description
Overview
Learn how to deploy Machine Learning models into production using BentoML in this comprehensive tutorial video. Explore the installation process for BentoML, save ML models to BentoML's local store, create a BentoML service, build and containerize a bento with Docker, and send requests to receive inferences. Follow along as the instructor demonstrates training a simple ConvNet model on MNIST, saving a Keras model, and running a BentoML service via Docker. Gain insights into deployment options such as Kubernetes and cloud platforms, and access the accompanying code on GitHub for hands-on practice.
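The save-and-serve flow described above can be sketched roughly as follows, assuming the BentoML 1.x API. The tag `mnist_classifier`, the service name, the preprocessing helper, and the (1, 28, 28, 1) input shape are illustrative assumptions, not taken from the video, and the BentoML calls are wrapped in a guard so the sketch reads standalone; a real `service.py` would drop the guard:

```python
import numpy as np

def to_model_input(image: np.ndarray) -> np.ndarray:
    """Illustrative helper (assumption, not from the video): scale an
    8-bit 28x28 MNIST digit to [0, 1] and add batch and channel
    dimensions, matching a typical ConvNet input shape."""
    return (image.astype("float32") / 255.0).reshape(1, 28, 28, 1)

try:
    import bentoml  # assumes `pip install bentoml` (1.x API)
    from bentoml.io import NumpyNdarray

    # Pull a Keras model previously saved to the local store with
    # bentoml.keras.save_model("mnist_classifier", model)
    # and wrap it in a runner for inference.
    runner = bentoml.keras.get("mnist_classifier:latest").to_runner()
    svc = bentoml.Service("mnist_classifier_service", runners=[runner])

    @svc.api(input=NumpyNdarray(), output=NumpyNdarray())
    async def classify(input_array: np.ndarray) -> np.ndarray:
        # Delegate inference to the runner; BentoML schedules the call.
        return await runner.predict.async_run(input_array)
except Exception:
    # Guarded only so this sketch is importable where BentoML or the
    # saved model is absent.
    pass
```

With such a file saved as `service.py`, `bentoml serve service:svc` starts a local HTTP server (port 3000 by default) exposing the `classify` endpoint.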
Syllabus
Intro
BentoML deployment steps
Installing BentoML and other requirements
Training a simple ConvNet model on MNIST
Saving Keras model to BentoML local store
Creating BentoML service
Sending requests to BentoML service
Creating a bento
Serving a model through a bento
Dockerising a bento
Running BentoML service via Docker
Deployment options: Kubernetes + Cloud
Outro
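The bento-building and Docker steps in the syllabus revolve around a `bentofile.yaml` placed next to the service file. A minimal sketch, assuming the service object is named `svc` inside `service.py` (all names here are placeholders, not confirmed from the video):

```yaml
# bentofile.yaml: build recipe consumed by `bentoml build`
service: "service:svc"   # module:variable of the BentoML service (assumed names)
include:
  - "service.py"         # source files packaged into the bento
python:
  packages:              # runtime dependencies bundled with the model
    - tensorflow
    - numpy
```

From there, `bentoml build` assembles the bento, `bentoml containerize` turns the resulting bento tag into a Docker image, and `docker run -p 3000:3000` on that image serves the same endpoints in a container, which is the jumping-off point for the Kubernetes and cloud options mentioned at the end of the video.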
Taught by
Valerio Velardo - The Sound of AI
Related Courses
Software as a Service (University of California, Berkeley via Coursera)
Software Defined Networking (Georgia Institute of Technology via Coursera)
Pattern-Oriented Software Architectures: Programming Mobile Services for Android Handheld Systems (Vanderbilt University via Coursera)
Web-Technologien (openHPI)
Données et services numériques, dans le nuage et ailleurs (Certificat informatique et internet via France Université Numérique)