Efficient Large-Scale Language Model Training on GPU Clusters

Offered By: Databricks via YouTube

Tags

Machine Learning Courses
GPU Computing Courses
Distributed Computing Courses
Parallel Processing Courses
Model Training Courses

Course Description

Overview

Explore efficient large-scale language model training on GPU clusters in this 23-minute video from Databricks. Learn about the challenges of training massive models, including GPU memory limitations and lengthy computation times. Discover how combining tensor, pipeline, and data parallelism scales training to thousands of GPUs, enabling roughly a hundredfold increase in trainable model size. Examine a novel pipeline parallelism schedule that boosts throughput by more than 10% over existing approaches. Gain insight into the trade-offs between the different parallelism techniques and how to tune a distributed training configuration. See how the combined methods achieve 502 petaFLOP/s on a 1-trillion-parameter model using 3072 GPUs, reaching 52% of peak per-GPU throughput. Access the open-source code and understand the implementation details, including domain-specific optimizations that improve GPU utilization.
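The core idea behind the numbers above is that the three parallelism degrees multiply: each model replica is sharded across the tensor-parallel and pipeline-parallel GPUs, and data parallelism replicates that slice. The sketch below is not code from the talk; the function names and configuration values are illustrative assumptions. It shows how the degrees combine into a total GPU count and how an interleaved schedule shrinks the pipeline bubble discussed in the syllabus.

```python
# Minimal sketch (illustrative, not the talk's implementation) of how tensor,
# pipeline, and data parallelism degrees combine, and of the pipeline-bubble
# estimate that an interleaved schedule reduces.

def total_gpus(tensor_parallel: int, pipeline_parallel: int, data_parallel: int) -> int:
    """A model replica spans tensor_parallel * pipeline_parallel GPUs;
    data parallelism replicates that slice data_parallel times."""
    return tensor_parallel * pipeline_parallel * data_parallel


def pipeline_bubble_fraction(pipeline_parallel: int, microbatches: int,
                             interleaved_chunks: int = 1) -> float:
    """Approximate fraction of an iteration that pipeline stages sit idle.

    For a non-interleaved 1F1B schedule this is (p - 1) / m; splitting each
    device's layers into v interleaved chunks shrinks the bubble by a factor of v.
    """
    return (pipeline_parallel - 1) / (interleaved_chunks * microbatches)


if __name__ == "__main__":
    # Hypothetical configuration: 8-way tensor, 64-way pipeline, 6-way data
    # parallelism multiplies out to 3072 GPUs, the scale cited in the overview.
    print(total_gpus(8, 64, 6))                 # 3072
    print(pipeline_bubble_fraction(64, 128))    # non-interleaved bubble ~0.49
    print(pipeline_bubble_fraction(64, 128, 2)) # interleaved: half the bubble
```

Under these assumed values, interleaving halves the idle fraction per iteration, which is the kind of gain the "Interleave Schedule" segment quantifies as a throughput improvement of over 10%.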

Syllabus

Introduction
GPU Cluster
Model Training Graph
Training
Idle Periods
Pipelining
Pipeline Bubble
Tradeoffs
Interleave Schedule
Results
Hyperparameters
Domain-Specific Optimization
GPU Throughput
Implementation
Conclusion


Taught by

Databricks

Related Courses

How Google does Machine Learning en Español
Google Cloud via Coursera
Creating Custom Callbacks in Keras
Coursera Project Network via Coursera
Automatic Machine Learning with H2O AutoML and Python
Coursera Project Network via Coursera
AI in Healthcare Capstone
Stanford University via Coursera
AutoML con Pycaret y TPOT
Coursera Project Network via Coursera