
AntMan - Dynamic Scaling on GPU Clusters for Deep Learning

Offered By: USENIX via YouTube

Tags

OSDI (Operating Systems Design and Implementation) Courses, Memory Management Courses, Benchmarking Courses

Course Description

Overview

Explore a conference talk on AntMan, a deep learning infrastructure designed to efficiently manage and scale GPU resources for complex deep learning workloads. Discover how this system, deployed at Alibaba, improves GPU utilization by dynamically scaling memory and computation within deep learning frameworks. Learn about the co-design of cluster schedulers with deep learning frameworks, enabling multiple jobs to share GPU resources without compromising performance. Gain insights into how AntMan addresses the challenges of fluctuating resource demands in deep learning training jobs, resulting in significant improvements in GPU memory and computation unit utilization. Understand the unique approach to efficiently utilizing GPUs at scale, which has implications for job performance, system throughput, and hardware utilization in large-scale deep learning environments.
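To make the dynamic-scaling idea concrete, below is a minimal, hypothetical Python sketch of the scheduling behavior the talk describes: an opportunistic job's GPU memory cap is shrunk when a resource-guaranteed job on the same GPU needs more memory, and grown back when memory frees up. This is not AntMan's actual code or API; the job model, memory figures, and the rebalance helper are illustrative assumptions only.

```python
# Hypothetical sketch of AntMan-style dynamic GPU memory scaling.
# Guaranteed jobs always receive their full demand; opportunistic jobs
# are capped at whatever memory is left over, so a demand spike on the
# guaranteed job shrinks the opportunistic job instead of causing an OOM.

from dataclasses import dataclass

GPU_MEMORY_MB = 16_000  # assumed total GPU memory


@dataclass
class Job:
    name: str
    guaranteed: bool      # resource-guaranteed vs. opportunistic
    demand_mb: int        # memory the job currently wants
    limit_mb: int = 0     # cap enforced by the (hypothetical) framework


def rebalance(jobs: list[Job]) -> None:
    """Give guaranteed jobs their demand; opportunistic jobs share the rest."""
    guaranteed_need = sum(j.demand_mb for j in jobs if j.guaranteed)
    spare = max(GPU_MEMORY_MB - guaranteed_need, 0)
    opportunistic = [j for j in jobs if not j.guaranteed]
    for j in jobs:
        if j.guaranteed:
            j.limit_mb = j.demand_mb
        else:
            # Shrink or grow the opportunistic job's cap to fit spare memory.
            j.limit_mb = min(j.demand_mb, spare // max(len(opportunistic), 1))


if __name__ == "__main__":
    jobs = [
        Job("production-training", guaranteed=True, demand_mb=6_000),
        Job("best-effort-tuning", guaranteed=False, demand_mb=12_000),
    ]
    rebalance(jobs)
    print([(j.name, j.limit_mb) for j in jobs])  # opportunistic job capped at 10,000 MB

    # The guaranteed job's demand grows (e.g., a larger mini-batch);
    # the opportunistic job's cap shrinks rather than being killed.
    jobs[0].demand_mb = 13_000
    rebalance(jobs)
    print([(j.name, j.limit_mb) for j in jobs])  # opportunistic job shrunk to 3,000 MB
```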

Syllabus

Intro
Deep Learning in production
Observations: Low utilization
Opportunities
Outline
Dynamic scaling memory
Dynamic scaling computation: Exclusive mode
AntMan architecture
Micro-benchmark: Memory grow-shrink
Micro-benchmark: Adaptive computation
Trace experiment
Large-scale experiment
Conclusion


Taught by

USENIX

Related Courses

GraphX - Graph Processing in a Distributed Dataflow Framework
USENIX via YouTube
Theseus - An Experiment in Operating System Structure and State Management
USENIX via YouTube
RedLeaf - Isolation and Communication in a Safe Operating System
USENIX via YouTube
Microsecond Consensus for Microsecond Applications
USENIX via YouTube
KungFu - Making Training in Distributed Machine Learning Adaptive
USENIX via YouTube