Production Multi-node Jobs with Gang Scheduling, K8s, GPUs and RDMA
Offered By: CNCF [Cloud Native Computing Foundation] via YouTube
Course Description
Overview
Explore production multi-node job execution with gang scheduling, Kubernetes, GPUs, and RDMA in this conference talk from KubeCon + CloudNativeCon. Dive into the challenges and solutions for running distributed deep learning and machine learning workloads in shared Kubernetes clusters. Learn about distributed TensorFlow, PyTorch, Horovod, and MPI implementations, as well as the use of GPU nodes with NCCL and RDMA for accelerated performance. Discover the end-to-end flow for multi-node jobs in Kubernetes, including gang scheduling, quotas, fairness, and backfilling implemented in a custom GPU scheduler. Gain insights into high-speed networking through RoCE and SR-IOV/Multus CNI, and understand design choices, learnings, and operational experiences, including failure handling, performance optimization, and telemetry in large-scale distributed computing environments.
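Gang scheduling here means all-or-nothing placement: a multi-node job's pods are admitted only when the whole set can run, so a half-scheduled job never deadlocks the cluster holding GPUs. The talk covers a custom GPU scheduler; as a hedged sketch of the same idea, the open-source Volcano scheduler expresses gang membership with a PodGroup CRD whose `minMember` sets the all-or-nothing threshold (all names below — `train-pg`, `dist-train-worker-0`, the image tag — are illustrative, not from the talk):

```yaml
# Sketch: gang scheduling via Volcano's PodGroup CRD (stand-in for the
# custom GPU scheduler described in the talk; names are hypothetical).
apiVersion: scheduling.volcano.sh/v1beta1
kind: PodGroup
metadata:
  name: train-pg
spec:
  minMember: 4        # admit only when all 4 worker pods can be placed
  queue: default      # queue against which quotas/fairness are enforced
---
apiVersion: v1
kind: Pod
metadata:
  name: dist-train-worker-0
  annotations:
    scheduling.k8s.io/group-name: train-pg   # gang membership
spec:
  schedulerName: volcano
  containers:
  - name: worker
    image: nvcr.io/nvidia/pytorch:24.01-py3  # illustrative image
    resources:
      limits:
        nvidia.com/gpu: 8                    # one full 8-GPU node, e.g. DGX-1
```

With `minMember: 4`, the scheduler holds all four pods pending until capacity for the whole gang exists, which is also the point where queue quotas, fairness, and backfilling decisions apply.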
Syllabus
Intro
Deep Learning Applications
AI/DL: Models, Frameworks, Hardware
Trends: Big Data, Larger Models
Sample Multi-GPU Node: DGX-1
Distributed Training Applications: Multi-GPU, Multi-node
K8s Challenges & Outline
K8s Orchestration Flow
Sample PyTorch Job Launch
Array Jobs and MPI Operator
SR-IOV CNI for K8s Multi-Rail
Gang Scheduling Multi-Node Pods
PodGroup Queue and Manager
Demo
Sample Job Real-Time Telemetry
Sample BERT K8s Scaling
Shared K8s Cluster for Multi-node
Scheduler Dashboard
Summary and Future Work
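For the "Array Jobs and MPI Operator" and "Sample PyTorch Job Launch" items above, a common pattern is Kubeflow's MPI Operator: an MPIJob CRD spawns one launcher pod plus N GPU worker pods, and `mpirun` starts one rank per GPU slot. The sketch below is an assumption-laden illustration, not the talk's actual job spec; the job name, images, script, and the RDMA resource name (`rdma/roce`, which varies by device plugin) are all hypothetical:

```yaml
# Sketch: Horovod training via the Kubeflow MPI Operator (v2beta1 API).
# 2 workers x 8 GPU slots = 16 ranks; names/images are illustrative.
apiVersion: kubeflow.org/v2beta1
kind: MPIJob
metadata:
  name: bert-horovod
spec:
  slotsPerWorker: 8            # one MPI slot per GPU on each worker
  mpiReplicaSpecs:
    Launcher:
      replicas: 1
      template:
        spec:
          containers:
          - name: launcher
            image: horovod/horovod:latest
            command: ["mpirun", "-np", "16", "python", "train_bert.py"]
    Worker:
      replicas: 2
      template:
        spec:
          containers:
          - name: worker
            image: horovod/horovod:latest
            env:
            - name: NCCL_IB_HCA        # steer NCCL onto the RDMA NICs
              value: mlx5
            - name: NCCL_DEBUG
              value: INFO
            resources:
              limits:
                nvidia.com/gpu: 8
                rdma/roce: 1           # RDMA resource exposed via SR-IOV/Multus
```

The worker pods form the gang that the scheduler must place atomically; the NCCL environment variables direct GPU collective traffic over the RoCE fabric rather than the default pod network.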
Taught by
CNCF [Cloud Native Computing Foundation]
Related Courses
Neural Networks for Machine Learning - University of Toronto via Coursera
Machine Learning Techniques (機器學習技法) - National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning - University of Washington via Coursera
Applied Problems of Data Analysis (Прикладные задачи анализа данных) - Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning - Microsoft via edX