
VeScale - A PyTorch Native LLM Training Framework

Offered By: Linux Foundation via YouTube

Tags

PyTorch Courses, Deep Learning Courses, Distributed Training Courses

Course Description

Overview

Explore a cutting-edge PyTorch-native LLM training framework in this conference talk by Hongyu Zhu from ByteDance. Delve into VeScale, a novel solution designed to address the challenges of distributed training for large language models. Learn how the framework combines PyTorch nativeness with automatic parallelism to simplify scaling LLM training: developers write ordinary single-device PyTorch code, and the framework automatically parallelizes it across n-dimensional (nD) parallelism. Gain insights into why ease of use matters in industry-level frameworks and how VeScale aims to bridge the gap between the dominance of the PyTorch ecosystem and the complex requirements of training giant models. Understand the limitations of existing frameworks and how VeScale's approach seeks to overcome them, potentially reshaping LLM training in both research and industry settings.
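
The "write single-device code, parallelize it afterwards" idea described in the talk can be illustrated with upstream PyTorch's DTensor tensor-parallel API (available in recent PyTorch 2.x releases). The sketch below is not VeScale's own API; the toy MLP, the sharding plan, and the torchrun launch are illustrative assumptions meant only to convey the concept of declarative, framework-driven parallelization.

```python
# Minimal sketch (NOT VeScale code): single-device PyTorch module parallelized
# afterwards with PyTorch's built-in DTensor tensor-parallel API.
# Assumes a multi-GPU machine and a torchrun launch.
import os

import torch
import torch.nn as nn
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor.parallel import (
    ColwiseParallel,
    RowwiseParallel,
    parallelize_module,
)


class MLP(nn.Module):
    """Ordinary single-device PyTorch module: no communication code inside."""

    def __init__(self, dim: int = 1024):
        super().__init__()
        self.up = nn.Linear(dim, 4 * dim)
        self.down = nn.Linear(4 * dim, dim)

    def forward(self, x):
        return self.down(torch.relu(self.up(x)))


def main():
    # torchrun sets LOCAL_RANK / WORLD_SIZE; bind each process to one GPU.
    torch.cuda.set_device(int(os.environ.get("LOCAL_RANK", "0")))
    world_size = int(os.environ.get("WORLD_SIZE", "1"))

    # A 1D device mesh; higher-dimensional meshes are what "nD parallelism" refers to.
    mesh = init_device_mesh("cuda", (world_size,))

    model = MLP().cuda()

    # Declarative plan: shard `up` column-wise and `down` row-wise; the framework
    # inserts the required collectives instead of the user writing them by hand.
    model = parallelize_module(
        model,
        mesh,
        {"up": ColwiseParallel(), "down": RowwiseParallel()},
    )

    x = torch.randn(8, 1024, device="cuda")
    print(model(x).shape)  # (8, 1024) on every rank


if __name__ == "__main__":
    main()
```

Run with, for example, `torchrun --nproc_per_node=4 sketch.py`. Extending the mesh with additional dimensions for data or pipeline parallelism is the kind of nD composition the talk attributes to VeScale's automatic approach.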

Syllabus

VeScale: A PyTorch Native LLM Training Framework - Hongyu Zhu, ByteDance


Taught by

Linux Foundation

Related Courses

Custom and Distributed Training with TensorFlow
DeepLearning.AI via Coursera
Architecting Production-ready ML Models Using Google Cloud ML Engine
Pluralsight
Building End-to-end Machine Learning Workflows with Kubeflow
Pluralsight
Deploying PyTorch Models in Production: PyTorch Playbook
Pluralsight
Inside TensorFlow
TensorFlow via YouTube