YoVDO

TACCL - Guiding Collective Algorithm Synthesis Using Communication Sketches

Offered By: USENIX via YouTube

Tags

USENIX Symposium on Networked Systems Design and Implementation (NSDI) Courses
Machine Learning Courses
GPU Computing Courses
Algorithm Optimization Courses
Distributed Training Courses

Course Description

Overview

Explore a 15-minute conference talk from USENIX NSDI '23 that introduces TACCL, a tool for optimizing collective communication when training machine learning models across multiple GPUs and servers. Delve into the challenges of efficient collective communication in distributed training and discover how TACCL uses a novel communication-sketch abstraction to guide algorithm synthesis. Learn how TACCL generates optimized algorithms for a range of hardware configurations and communication collectives, outperforming the Nvidia Collective Communication Library (NCCL). Gain insight into the tool's impact on end-to-end training of models such as Transformer-XL and BERT, with speedups ranging from 11% to 2.3x depending on batch size.
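To make "collective algorithm synthesis" concrete, the sketch below is a hand-written ring all-reduce schedule in pure Python: the classic bandwidth-efficient algorithm built from a reduce-scatter phase followed by an all-gather phase over a logical ring. This is an illustration of the kind of schedule such tools search for, not TACCL's actual output or API; the function name and structure are hypothetical.

```python
def ring_all_reduce(buffers):
    """Sum-all-reduce `buffers` (one list of numbers per node) over a logical ring.

    Illustrative only: a textbook ring schedule, not TACCL's synthesized output.
    Each node ends with the element-wise sum of all nodes' data after
    2*(n-1) steps of neighbor-to-neighbor chunk transfers.
    Assumes each buffer's length is divisible by the number of nodes.
    """
    n = len(buffers)
    chunk = len(buffers[0]) // n

    def span(c):
        # Index range of chunk c within a buffer.
        return slice(c * chunk, (c + 1) * chunk)

    # Phase 1: reduce-scatter. At step s, node `src` sends chunk (src - s) mod n
    # to its ring neighbor, which accumulates it. After n-1 steps, node i holds
    # the fully reduced sum for chunk (i + 1) mod n.
    for step in range(n - 1):
        for src in range(n):
            dst = (src + 1) % n
            s = span((src - step) % n)
            buffers[dst][s] = [a + b for a, b in zip(buffers[dst][s], buffers[src][s])]

    # Phase 2: all-gather. Circulate each fully reduced chunk around the ring;
    # at step s, node `src` forwards chunk (src + 1 - s) mod n to its neighbor.
    for step in range(n - 1):
        for src in range(n):
            dst = (src + 1) % n
            s = span((src + 1 - step) % n)
            buffers[dst][s] = buffers[src][s]

    return buffers
```

For example, with 4 nodes each holding 8 values, every node finishes with the element-wise sum of all 4 input buffers. TACCL's contribution is automating the search over schedules like this one, with a user-supplied communication sketch constraining the search to the target topology.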

Syllabus

NSDI '23 - TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches


Taught by

USENIX

Related Courses

Building Language Models on AWS
Amazon Web Services via AWS Skill Builder
Building Language Models on AWS (Japanese)
Amazon Web Services via AWS Skill Builder
Building Language Models on AWS (Japanese) 日本語字幕版
Amazon Web Services via AWS Skill Builder
Building Language Models on AWS (Japanese) (Sub) 日本語字幕版
Amazon Web Services via AWS Skill Builder
Intel® Solutions Pro – AI in the Cloud
Intel via Coursera