YoVDO

StreamBox - A Lightweight GPU Sandbox for Serverless Inference Workflow

Offered By: USENIX via YouTube

Tags

CUDA Courses, Memory Management Courses, Inference Courses, Serverless Computing Courses, Deep Neural Networks Courses, Sandboxing Courses

Course Description

Overview

Explore a groundbreaking conference talk on StreamBox, a lightweight GPU sandbox designed for serverless inference workflows. Delve into the challenges of dynamic workloads and latency-sensitive DNN inference in serverless computing environments. Discover how StreamBox addresses the limitations of existing serverless inference systems by implementing fine-grained and auto-scaling memory management, enabling transparent and efficient intra-GPU communication across functions, and facilitating PCIe bandwidth sharing among concurrent streams. Learn about the significant improvements StreamBox offers, including up to 82% reduction in GPU memory footprint and a 6.7X increase in throughput compared to state-of-the-art systems. Gain insights into the potential impact of this innovative approach on scalable DNN inference serving and the future of serverless computing for GPU-intensive tasks.
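To make the idea of GPU sharing among concurrent functions concrete, here is a minimal, hypothetical CUDA sketch. It only illustrates the general stream-level concurrency that StreamBox builds on; it is not StreamBox's actual API, and the kernel, sizes, and function count are placeholders chosen for the example.

```cuda
// Illustrative sketch only: several "serverless functions" sharing one GPU
// by submitting work on separate CUDA streams. Not StreamBox's implementation.
#include <cuda_runtime.h>
#include <cstdio>

// Hypothetical stand-in for one inference function's GPU work.
__global__ void inference_kernel(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * 2.0f;  // placeholder compute
}

int main() {
    const int n = 1 << 20;
    const int kFunctions = 4;          // number of concurrent functions (assumed)
    cudaStream_t streams[kFunctions];
    float *in, *out;
    cudaMalloc(&in,  n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));

    for (int f = 0; f < kFunctions; ++f)
        cudaStreamCreate(&streams[f]);

    // Each function launches on its own stream, so kernels and transfers from
    // different functions can overlap on the same device -- the kind of
    // fine-grained GPU sharing the talk is concerned with.
    for (int f = 0; f < kFunctions; ++f)
        inference_kernel<<<(n + 255) / 256, 256, 0, streams[f]>>>(in, out, n);

    for (int f = 0; f < kFunctions; ++f) {
        cudaStreamSynchronize(streams[f]);
        cudaStreamDestroy(streams[f]);
    }
    cudaFree(in);
    cudaFree(out);
    printf("done\n");
    return 0;
}
```

StreamBox's contribution, as described in the talk, is managing this kind of sharing safely and efficiently across functions: auto-scaling memory allocation, transparent intra-GPU communication, and PCIe bandwidth sharing, none of which the plain stream API above provides on its own.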

Syllabus

USENIX ATC '24 - StreamBox: A Lightweight GPU SandBox for Serverless Inference Workflow


Taught by

USENIX

Related Courses

Discrete Inference and Learning in Artificial Vision
École Centrale Paris via Coursera
Teaching Literacy Through Film
The British Film Institute via FutureLearn
Linear Regression and Modeling
Duke University via Coursera
Probability and Statistics
Stanford University via Stanford OpenEdx
Statistical Reasoning
Stanford University via Stanford OpenEdx