How to Eliminate I/O Bottleneck and Continuously Feed GPU While Training in the Cloud
Offered By: Linux Foundation via YouTube
Course Description
Overview
Discover strategies to optimize I/O performance and maintain GPU utilization during machine learning model training in the cloud. This 27-minute conference talk explores the challenges of data-intensive training, focusing on the frequent small-file I/O typical of image and audio datasets. Learn about a novel architecture designed to enhance the entire data pipeline and sustain the high throughput that GPUs demand. Gain insights into implementing this architecture for PyTorch workloads on Kubernetes in public cloud environments, addressing the data access patterns and I/O challenges of model training, which differ from those of traditional data analytics.
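The talk itself is not reproduced here, but as a rough sketch of the data-loading side of such a pipeline, a PyTorch DataLoader can be tuned to overlap small-file I/O with GPU compute. The dataset path, batch size, and worker counts below are illustrative assumptions, not values taken from the talk.

import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Hypothetical image dataset stored as many small files (e.g., JPEGs),
# the access pattern the talk identifies as an I/O bottleneck.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("/mnt/training-data", transform=transform)

# Overlap I/O with GPU compute: multiple loader workers read and decode
# small files in parallel, prefetch_factor keeps batches queued ahead of
# the GPU, and pin_memory speeds up host-to-device copies.
loader = DataLoader(
    dataset,
    batch_size=256,
    shuffle=True,
    num_workers=8,
    pin_memory=True,
    prefetch_factor=4,
    persistent_workers=True,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for images, labels in loader:
    images = images.to(device, non_blocking=True)
    labels = labels.to(device, non_blocking=True)
    # ... forward/backward pass would go here ...

Tuning knobs like num_workers and prefetch_factor only address the host side of the pipeline; the architecture discussed in the talk also targets the storage layer that feeds those workers in a Kubernetes public cloud deployment.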
Syllabus
How to Eliminate the I/O Bottleneck and Continuously Feed the GPU While Training in the Cloud - Lu Qiu
Taught by
Linux Foundation
Related Courses
Deep Learning with Python and PyTorch (IBM via edX)
Introduction to Machine Learning (Duke University via Coursera)
How Google does Machine Learning em Português Brasileiro (Google Cloud via Coursera)
Intro to Deep Learning with PyTorch (Facebook via Udacity)
Secure and Private AI (Facebook via Udacity)