VeScale - A PyTorch Native LLM Training Framework for Automatic Parallelism
Offered By: CNCF [Cloud Native Computing Foundation] via YouTube
Course Description
Overview
Explore a PyTorch native framework for large language model (LLM) training in this 24-minute conference talk by Hongyu Zhu of ByteDance. Learn about VeScale, a solution that combines PyTorch nativeness with automatic parallelism to address the challenges of distributed training for giant LLMs. Discover how the framework prioritizes ease of use: developers write single-device PyTorch code, and VeScale automatically parallelizes it into nD parallelism. Gain insight into why the dominance of the PyTorch ecosystem matters, why training massive models requires complex nD parallelism, and where existing industry-level frameworks fall short. Understand how VeScale aims to overcome these limitations with a user-friendly approach to scaling LLM training.
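To make the talk's core idea concrete, the sketch below illustrates the general concept behind nD parallelism: a logical tensor placed on an n-dimensional device mesh, replicated along some mesh dimensions and sharded along others. This is a hypothetical, framework-free illustration of the idea, not VeScale's actual API; the function `shard_on_mesh` and its `placements` format are assumptions made for this example.

```python
# Hypothetical illustration (not VeScale's actual API): how an nD device
# mesh maps one logical weight tensor onto per-device shards.
from itertools import product

def shard_on_mesh(tensor_shape, mesh_shape, placements):
    """Return per-device shard shapes for a tensor on an nD device mesh.

    placements[i] describes mesh dimension i:
      ("shard", d) -> split tensor dimension d across that mesh dimension
      "replicate"  -> every device along that mesh dimension holds a copy
    """
    shards = {}
    # Enumerate every device coordinate in the mesh, e.g. (0, 0) .. (1, 3).
    for coord in product(*(range(n) for n in mesh_shape)):
        shape = list(tensor_shape)
        for mesh_dim, placement in enumerate(placements):
            if placement != "replicate":
                _, tensor_dim = placement
                # Each device along this mesh dimension holds an equal slice.
                shape[tensor_dim] //= mesh_shape[mesh_dim]
        shards[coord] = tuple(shape)
    return shards

# A 2x4 mesh: 2-way data parallelism (weights replicated) combined with
# 4-way tensor parallelism (rows of a 1024x4096 weight sharded).
shards = shard_on_mesh((1024, 4096), (2, 4), ["replicate", ("shard", 0)])
print(len(shards), shards[(0, 0)])  # 8 devices, each holding a (256, 4096) shard
```

The appeal of the approach the talk describes is that the user never writes this placement logic by hand: the framework derives mesh placements from ordinary single-device PyTorch code.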
Syllabus
VeScale: A PyTorch Native LLM Training Framework - Hongyu Zhu
Taught by
CNCF [Cloud Native Computing Foundation]
Related Courses
Deep Learning with Python and PyTorch - IBM via edX
Introduction to Machine Learning - Duke University via Coursera
How Google does Machine Learning (in Brazilian Portuguese) - Google Cloud via Coursera
Intro to Deep Learning with PyTorch - Facebook via Udacity
Secure and Private AI - Facebook via Udacity