Fine-Tuning Giant Neural Networks on Commodity Hardware with Automatic Pipeline Model Parallelism

Offered By: USENIX via YouTube

Tags

USENIX Annual Technical Conference Courses
Natural Language Processing (NLP) Courses
Neural Networks Courses
Transformers Courses
Fine-Tuning Courses

Course Description

Overview

Explore a groundbreaking approach to fine-tuning giant neural networks on commodity hardware in this 14-minute conference talk from USENIX ATC '21. Delve into FTPipe, an innovative system that introduces a new dimension of pipeline model parallelism, making multi-GPU fine-tuning of massive neural networks feasible on standard equipment. Learn about the novel Mixed-pipe approach to model partitioning and task allocation, which permits more flexible and efficient use of GPU resources without compromising accuracy. Discover how this technique achieves up to a 3× speedup and state-of-the-art accuracy when fine-tuning giant transformers with billions of parameters, such as BERT-340M, GPT2-1.5B, and T5-3B, on commodity RTX 2080 Ti GPUs. Gain insights into how this technology can democratize access to state-of-the-art models pre-trained on high-end supercomputing systems.
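
For a concrete picture of what pipeline model parallelism means in practice, the sketch below splits a toy model into two stages on separate GPUs and streams micro-batches through them. This is a minimal illustration under stated assumptions (PyTorch, two CUDA devices); the TwoStagePipeline class, the layer counts, and the device placement are all hypothetical, and FTPipe's Mixed-pipe partitioning is considerably more flexible than this contiguous two-way split.

```python
# A minimal sketch of pipeline model parallelism, in the spirit of the talk
# (illustrative only -- not FTPipe's actual partitioner or scheduler).
# Assumes PyTorch and two CUDA devices; all names and sizes are hypothetical.
import torch
import torch.nn as nn

class TwoStagePipeline(nn.Module):
    def __init__(self, hidden=1024, layers_per_stage=4):
        super().__init__()
        # Partition the model into two contiguous stages, one per GPU.
        # FTPipe's Mixed-pipe goes further, allowing non-contiguous
        # partitions to balance memory and compute across GPUs.
        self.stage0 = nn.Sequential(
            *[nn.Linear(hidden, hidden) for _ in range(layers_per_stage)]
        ).to("cuda:0")
        self.stage1 = nn.Sequential(
            *[nn.Linear(hidden, hidden) for _ in range(layers_per_stage)]
        ).to("cuda:1")

    def forward(self, x, micro_batches=4):
        # Stream micro-batches through the stages; asynchronous CUDA
        # kernel launches let stage 0 start micro-batch i+1 while
        # stage 1 is still processing micro-batch i.
        outputs = []
        for chunk in x.chunk(micro_batches):
            h = self.stage0(chunk.to("cuda:0"))
            outputs.append(self.stage1(h.to("cuda:1")))
        return torch.cat(outputs)

model = TwoStagePipeline()
out = model(torch.randn(32, 1024))  # output tensor lives on cuda:1
```

Splitting the batch into micro-batches is what keeps both GPUs busy at once; with a single full batch, each stage would sit idle while the other runs, which is exactly the inefficiency pipeline parallelism is meant to avoid.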

Syllabus

USENIX ATC '21 - Fine-tuning giant neural networks on commodity hardware with automatic pipeline model parallelism


Taught by

USENIX

Related Courses

Neural Networks for Machine Learning
University of Toronto via Coursera
Good Brain, Bad Brain: Basics
University of Birmingham via FutureLearn
Statistical Learning with R
Stanford University via edX
Machine Learning 1—Supervised Learning
Brown University via Udacity
Fundamentals of Neuroscience, Part 2: Neurons and Networks
Harvard University via edX