Optimizing Large-Scale Model Training with Ray Compiled Graphs

Offered By: Anyscale via YouTube

Tags

Machine Learning Courses Distributed Computing Courses Model Training Courses Multimodal AI Courses

Course Description

Overview

Explore advanced techniques for training large-scale models in this Ray Summit 2024 conference talk. Discover how Ray Core's latest features improve training efficiency for LLMs and multimodal AI models. Learn about Ray's native GPU-to-GPU communication and pre-compiled execution paths, and how they support the complex data and control flows of distributed model training. Gain insights into implementing pipeline parallelism and training multimodal models on heterogeneous GPUs. Compare Ray-based implementations against NCCL and PyTorch baselines, with a focus on simplicity and maintainability. Examine throughput and GPU-utilization benchmarks to understand the practical benefits of these optimizations. Ideal for ML researchers and engineers working on large-scale AI projects, this talk offers practical guidance on maximizing accelerator utilization and improving training efficiency in an era of increasingly large and complex models.
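
To make the pipeline-parallelism idea concrete, here is a minimal, hypothetical sketch (plain Python, not Ray's API) of a GPipe-style schedule: a model split across S stages processes M microbatches, with stage s handling microbatch m at tick s + m, so the whole batch finishes in S + M - 1 ticks instead of S * M sequential steps.

```python
def pipeline_schedule(num_stages: int, num_microbatches: int):
    """Return a list of ticks; each tick maps stage index -> microbatch index.

    Illustrative only: real systems (e.g. Ray Compiled Graphs) also overlap
    communication and backward passes, which this sketch omits.
    """
    ticks = []
    for t in range(num_stages + num_microbatches - 1):
        step = {}
        for s in range(num_stages):
            m = t - s  # microbatch this stage works on at tick t
            if 0 <= m < num_microbatches:
                step[s] = m
        ticks.append(step)
    return ticks


# 3 stages, 4 microbatches: the pipeline fills, runs all stages
# concurrently in the steady state, then drains.
schedule = pipeline_schedule(num_stages=3, num_microbatches=4)
for t, step in enumerate(schedule):
    print(t, step)
```

The steady-state ticks (where every stage is busy) are what pre-compiled execution paths aim to keep saturated; the fill and drain phases are the idle "bubbles" that microbatching shrinks.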

Syllabus

Optimizing Large-Scale Model Training with Ray Compiled Graphs | Ray Summit 2024


Taught by

Anyscale

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Natural Language Processing
Columbia University via Coursera
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent