
How Cookpad Leverages Triton Inference Server to Boost Model Serving

Offered By: CNCF [Cloud Native Computing Foundation] via YouTube

Tags

Machine Learning Courses GPU Computing Courses Scalability Courses Model Deployment Courses Infrastructure Management Courses

Course Description

Overview

Discover how Cookpad optimizes its machine learning model deployment using Triton Inference Server in this 32-minute conference talk. Learn about the challenges Machine Learning Platform teams face when scaling model deployment, including managing diverse frameworks and infrastructure requirements. Explore how Triton Inference Server, an open-source inference serving tool from NVIDIA, simplifies deployment and improves resource utilization. Gain insights into running multiple models concurrently on a single GPU, on CPUs, or across multi-GPU servers, and understand how Cookpad's ML Platform Engineers leverage this technology to boost their model serving capabilities.
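To illustrate the concurrent-model serving the talk covers, here is a minimal sketch of a model configuration in a Triton model repository; the model name and backend are hypothetical, but the fields follow Triton's `config.pbtxt` format:

```
# config.pbtxt — placed alongside the model files in a Triton model repository
name: "image_classifier"        # hypothetical model name
platform: "onnxruntime_onnx"    # example backend; depends on the model's framework
max_batch_size: 8

# Run two instances of this model concurrently on GPU 0, letting Triton
# interleave inference requests to improve GPU utilization.
instance_group [
  {
    count: 2
    kind: KIND_GPU
    gpus: [ 0 ]
  }
]
```

Setting `kind: KIND_CPU` instead would place the instances on CPU, and listing multiple GPU ids spreads instances across a multi-GPU server, which is the resource-sharing pattern the description refers to.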

Syllabus

How Cookpad Leverages Triton Inference Server to Boost Their Model Serving - Jose Navarro & Prayana Galih


Taught by

CNCF [Cloud Native Computing Foundation]

Related Courses

Developing a Tabular Data Model
Microsoft via edX
Data Science in Action - Building a Predictive Churn Model
SAP Learning
Serverless Machine Learning with TensorFlow on Google Cloud Platform (Japanese)
Google Cloud via Coursera
Intro to TensorFlow (Brazilian Portuguese)
Google Cloud via Coursera
Serverless Machine Learning with TensorFlow on GCP (Spanish)
Google Cloud via Coursera