Accelerating Collective Communication in Data Parallel Training across Deep Learning Frameworks
Offered By: USENIX via YouTube
Course Description
Overview
Explore a 17-minute conference talk from USENIX NSDI '22 that delves into accelerating collective communication in data parallel training across deep learning frameworks. Learn about new techniques developed within Horovod, a generic communication library, to improve the control plane and enhance performance in large-scale distributed training. Discover how the researchers implemented a caching strategy and decentralized orchestration to optimize the coordinator-worker logic, and introduced a feature that lets users group collective operations for finer control over communication buffer sizes. Examine the experimental results conducted on the Summit supercomputer, comparing the proposed strategies against Horovod's original design, tf.distribute, torch.DDP, and BytePS. Gain insights into the performance improvements achieved, including a 2x speedup at the 6,000-GPU scale and a near-linear scaling efficiency of 0.93 with 1.54 exaflops of sustained performance using 27,600 GPUs on a scientific application (STEMDL).
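To make the grouping idea concrete, below is a minimal sketch of how a user might group gradient allreduce operations in Horovod with PyTorch. It assumes Horovod 0.21 or later, where grouped allreduce and the `num_groups` option of `DistributedOptimizer` are available; the model, group count, and training loop are illustrative only and are not taken from the talk.

```python
# Minimal sketch: grouping Horovod's collective operations to control
# communication buffer sizes (assumes Horovod >= 0.21; values illustrative).
import torch
import horovod.torch as hvd

hvd.init()
torch.cuda.set_device(hvd.local_rank())

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Fuse gradient allreduces into a fixed number of groups instead of relying
# solely on timing-based tensor fusion, giving finer control over how much
# data each collective call moves.
optimizer = hvd.DistributedOptimizer(
    optimizer,
    named_parameters=model.named_parameters(),
    num_groups=4,  # illustrative group count
)

# Synchronize initial model and optimizer state across workers.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)

# Standard data-parallel step: gradients are averaged across workers via
# grouped allreduce inside optimizer.step().
for _ in range(10):
    optimizer.zero_grad()
    x = torch.randn(32, 1024).cuda()
    loss = model(x).sum()
    loss.backward()
    optimizer.step()
```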
Syllabus
NSDI '22 - Accelerating Collective Communication in Data Parallel Training across Deep Learning Frameworks
Taught by
USENIX
Related Courses
Neural Networks for Machine Learning - University of Toronto via Coursera
機器學習技法 (Machine Learning Techniques) - National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning - University of Washington via Coursera
Прикладные задачи анализа данных (Applied Problems of Data Analysis) - Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning - Microsoft via edX