YoVDO

Scaling up Without Slowing Down: Accelerating Pod Start Time in Kubernetes

Offered By: CNCF [Cloud Native Computing Foundation] via YouTube

Tags

Kubernetes Courses, Performance Tuning Courses, Container Orchestration Courses, Deep Learning Inference Courses

Course Description

Overview

Explore strategies for accelerating pod start times in Kubernetes environments in this 29-minute conference talk from the Cloud Native Computing Foundation (CNCF). Discover open-source approaches to reducing cold-start times of Kubernetes pods, including on-demand image loading, peer-to-peer image distribution, pre-warming nodes, and checkpoint/restore techniques. Learn how to optimize for different workload types, such as deep learning inference and ML training, and understand the latency tradeoffs across the entire pod lifecycle. Examine the impact of the proposed solutions on network congestion, node storage utilization, and reliability. Gain insights into selecting the optimal approach for your specific Kubernetes workloads, considering factors like runtime behavior and system scale. Presented by Ganeshkumar Ashokavardhanan from Microsoft and Yifan Yuan from Alibaba Cloud, this talk provides a comprehensive framework for improving pod start times and enhancing overall system efficiency.
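One of the techniques named above, pre-warming nodes, is commonly implemented by pre-pulling large container images onto every node before workload pods are scheduled. A minimal sketch of that pattern, using a DaemonSet with a hypothetical image name (`registry.example.com/model-server` is a placeholder, not from the talk):

```yaml
# Hypothetical DaemonSet that pre-pulls a large inference image onto every
# node, so pods scheduled later skip the cold image pull.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: image-prepuller        # hypothetical name
spec:
  selector:
    matchLabels:
      app: image-prepuller
  template:
    metadata:
      labels:
        app: image-prepuller
    spec:
      initContainers:
      - name: pull-model-server
        # Placeholder image; pulling it is the only purpose, then exit.
        image: registry.example.com/model-server:latest
        command: ["true"]
      containers:
      - name: pause
        # Tiny long-running container that keeps the DaemonSet pod alive
        # at negligible cost after the pull completes.
        image: registry.k8s.io/pause:3.9
```

The tradeoff the talk examines applies directly here: pre-pulling trades extra node storage and a burst of registry/network traffic at rollout time for faster pod starts later.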

Syllabus

Scaling up Without Slowing Down: Accelerating Pod Start Time


Taught by

CNCF [Cloud Native Computing Foundation]

Related Courses

Intel® Edge AI Fundamentals with OpenVINO™
Intel via Udacity
Introduction to AWS Inferentia and Amazon EC2 Inf1 Instances (Italian)
Amazon Web Services via AWS Skill Builder
Introduction to AWS Inferentia and Amazon EC2 Inf1 Instances (Japanese) (日本語吹き替え版)
Amazon Web Services via AWS Skill Builder
Introduction to AWS Inferentia and Amazon EC2 Inf1 Instances (Korean)
Amazon Web Services via AWS Skill Builder
Acceleration of Deep Learning Inference on Raspberry Pi's VideoCore GPU
tinyML via YouTube