Scale and Accelerate Distributed Model Training in Kubernetes Clusters
Offered By: MLOps World: Machine Learning in Production via YouTube
Course Description
Overview
Explore how to scale and accelerate distributed model training in Kubernetes clusters in this 49-minute conference talk from MLOps World: Machine Learning in Production. Learn from Jack Jin, Lead ML Infrastructure Engineer at Zoom, as he shares insights on orchestrating deep learning workloads across multiple GPUs and nodes. Discover how Kubernetes and the Kubeflow PyTorchJob operator can be used to schedule and track distributed training jobs on multi-GPU single-node and multi-GPU multi-node setups within a shared GPU resource pool. Learn how Zoom accelerates deep learning training with RDMA and RoCE, which bypass the OS kernel's network stack and offload transport processing from the CPU to the network hardware. Understand how these technologies are applied in Kubernetes using SR-IOV with the NVIDIA Network Operator in heterogeneous GPU clusters, and how they deliver near-linear scaling as the number of GPUs and worker nodes grows.
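
For orientation, below is a minimal sketch (not code from the talk) of the kind of PyTorch DistributedDataParallel entrypoint a Kubeflow PyTorchJob launches on each replica. PyTorchJob injects the MASTER_ADDR, MASTER_PORT, WORLD_SIZE, and RANK environment variables into every worker pod, which init_process_group reads via its env:// method; the model and training loop here are placeholders.

import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # NCCL is the usual backend for multi-GPU training; when the node fabric
    # supports RDMA/RoCE, NCCL can use it transparently for its collectives.
    dist.init_process_group(backend="nccl", init_method="env://")

    # LOCAL_RANK is set by some launchers; defaulting to 0 matches the common
    # pattern of one GPU per worker pod.
    local_rank = int(os.environ.get("LOCAL_RANK", "0"))
    torch.cuda.set_device(local_rank)

    # Placeholder model; DDP replicates it per process and syncs gradients.
    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for step in range(10):  # toy training loop
        x = torch.randn(32, 1024, device=local_rank)
        loss = model(x).sum()
        optimizer.zero_grad()
        loss.backward()  # gradients are all-reduced across workers here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

Because the gradient all-reduce in backward() rides on NCCL, the kernel-bypass networking described above translates directly into higher training throughput without changes to the training script itself.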
Syllabus
Scale and Accelerate the Distributed Model Training in Kubernetes Cluster
Taught by
MLOps World: Machine Learning in Production
Related Courses
Building End-to-end Machine Learning Workflows with Kubeflow (Pluralsight)
Smart Analytics, Machine Learning, and AI on GCP (Pluralsight)
Leveraging Cloud-Based Machine Learning on Google Cloud Platform: Real World Applications (LinkedIn Learning)
Distributed TensorFlow - TensorFlow at O'Reilly AI Conference, San Francisco '18 (TensorFlow via YouTube)
KFServing - Model Monitoring with Apache Spark and Feature Store (Databricks via YouTube)