ROLLER - Fast and Efficient Tensor Compilation for Deep Learning

Offered By: USENIX via YouTube

Tags

OSDI (Operating Systems Design and Implementation) Courses
Deep Learning Courses
GPU Acceleration Courses

Course Description

Overview

Explore a groundbreaking approach to tensor compilation for deep learning in this 15-minute conference talk from OSDI '22. Delve into ROLLER, a novel system that dramatically reduces kernel generation time from hours to seconds while maintaining competitive performance. Learn about the innovative rTile abstraction and recursive construction algorithm that enable efficient execution across various accelerators, including GPUs and IPUs. Discover how ROLLER's white-box solution addresses the challenges of excessive compilation times and large search spaces in existing DNN compilers. Examine the system's performance on different hardware, its ability to handle small and irregular shapes, and its impact on improving pipeline throughput. Gain insights into the future of deep learning development cycles and custom kernel creation for new hardware vendors.
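
The description above centers on ROLLER's white-box idea: instead of searching a huge schedule space with on-device tuning, kernels are constructed from tiles whose shapes align with hardware features and are ranked by an analytical performance model. The Python sketch below is only a toy illustration of that construction style for a matmul, not ROLLER's actual rTile algorithm or API; the hardware constants, helper names (HardwareSpec, aligned_tile_sizes, estimate_throughput, construct_matmul_tile), and the two-constant cost model are all simplified assumptions made for exposition.

```python
from dataclasses import dataclass

@dataclass
class HardwareSpec:
    """Simplified accelerator features a tile should align with (illustrative numbers only)."""
    memory_transaction_elems: int = 32    # elements fetched per memory transaction
    max_tile_elems: int = 16 * 1024       # rough on-chip buffer budget, in elements
    peak_flops: float = 15e12             # peak compute, FLOP/s (roughly V100-class fp32)
    peak_bandwidth: float = 900e9         # peak memory bandwidth, bytes/s (roughly HBM2)

def aligned_tile_sizes(spec: HardwareSpec, dim: int, cap: int = 512):
    """Tile sizes that are multiples of the memory-transaction width, capped for brevity."""
    step = spec.memory_transaction_elems
    return range(step, min(dim, cap) + 1, step)

def estimate_throughput(spec: HardwareSpec, tm: int, tn: int, tk: int,
                        dtype_bytes: int = 4) -> float:
    """Toy cost model: a tile is limited by either its compute or the data it must move."""
    flops = 2.0 * tm * tn * tk                                 # multiply-adds in the tile
    bytes_moved = (tm * tk + tk * tn + tm * tn) * dtype_bytes  # A and B tiles in, C tile out
    compute_time = flops / spec.peak_flops
    memory_time = bytes_moved / spec.peak_bandwidth
    return flops / max(compute_time, memory_time)              # estimated achieved FLOP/s

def construct_matmul_tile(spec: HardwareSpec, M: int, N: int, K: int):
    """Pick a tile shape analytically: enumerate a small space of aligned shapes and keep
    the one with the best modeled throughput that still fits the on-chip budget.
    No on-device auto-tuning is involved, which is what keeps 'compilation' fast."""
    best, best_tp = None, 0.0
    for tm in aligned_tile_sizes(spec, M):
        for tn in aligned_tile_sizes(spec, N):
            for tk in aligned_tile_sizes(spec, K):
                if tm * tn + tm * tk + tk * tn > spec.max_tile_elems:
                    continue  # would overflow the on-chip budget
                tp = estimate_throughput(spec, tm, tn, tk)
                if tp > best_tp:
                    best, best_tp = (tm, tn, tk), tp
    return best, best_tp

if __name__ == "__main__":
    spec = HardwareSpec()
    # The talk's motivating example is an 8192^3 matmul.
    tile, tp = construct_matmul_tile(spec, 8192, 8192, 8192)
    print(f"chosen tile (tm, tn, tk) = {tile}, modeled throughput ~ {tp:.3e} FLOP/s")
```

The only point of the sketch is the shape of the loop: hardware-aligned candidates plus an analytical model take the place of hours of on-device auto-tuning, which is the trade-off the talk argues brings kernel generation down from hours to seconds.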

Syllabus

Intro
Excessive Compilation Time
Black-Box Compiler
Motivating Example: 8k^3 matmul
Roller: A White-Box Solution
Improving Pipeline Throughput
Abstracted GPU (V100 Example)
Small & Irregular Shapes
Evaluation - V100 Performance
Evaluation - Compilation Time
Summary


Taught by

USENIX

Related Courses

GraphX - Graph Processing in a Distributed Dataflow Framework
USENIX via YouTube
Theseus - An Experiment in Operating System Structure and State Management
USENIX via YouTube
RedLeaf - Isolation and Communication in a Safe Operating System
USENIX via YouTube
Microsecond Consensus for Microsecond Applications
USENIX via YouTube
KungFu - Making Training in Distributed Machine Learning Adaptive
USENIX via YouTube