How to Eliminate I/O Bottleneck and Continuously Feed GPU While Training in the Cloud
Offered By: Linux Foundation via YouTube
Course Description
Overview
Discover strategies to optimize I/O performance and maintain GPU utilization during machine learning model training in the cloud. This 27-minute conference talk explores the challenges of data-intensive training processes, focusing on the frequent I/O requirements of small files like images and audio. Learn about a novel architecture designed to enhance the entire data pipeline and sustain the high throughput demanded by GPUs. Gain insights into implementing this architecture for PyTorch workloads on Kubernetes in public cloud environments, addressing the unique data access patterns and I/O challenges specific to model training compared to traditional data analytics.
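The talk itself is video-only, but as context for the GPU-feeding problem it addresses, the sketch below shows a common baseline mitigation in plain PyTorch: overlapping small-file I/O with GPU compute using multi-process loader workers, pinned memory, and prefetching. The dataset class, file paths, and parameter values are illustrative assumptions and are not taken from the talk.

```python
# Minimal sketch (not from the talk): keeping a GPU fed in PyTorch by
# overlapping small-file I/O with compute. Dataset, paths, and parameter
# values are illustrative assumptions.
import torch
from torch.utils.data import Dataset, DataLoader


class SmallFileDataset(Dataset):
    """Hypothetical dataset of many small image/audio files."""

    def __init__(self, file_paths):
        self.file_paths = file_paths

    def __len__(self):
        return len(self.file_paths)

    def __getitem__(self, idx):
        # Each item triggers one small-file read; real code would decode
        # an image or audio clip from self.file_paths[idx].
        return torch.randn(3, 224, 224), 0


device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
dataset = SmallFileDataset([f"sample_{i}.jpg" for i in range(10_000)])

loader = DataLoader(
    dataset,
    batch_size=256,
    shuffle=True,
    num_workers=8,            # parallel reader processes hide per-file latency
    pin_memory=True,          # page-locked buffers speed up host-to-GPU copies
    prefetch_factor=4,        # each worker keeps batches queued ahead of the GPU
    persistent_workers=True,  # avoid re-spawning workers every epoch
)

model = torch.nn.Linear(3 * 224 * 224, 10).to(device)

for images, labels in loader:
    # non_blocking=True overlaps the host-to-device copy with GPU compute
    images = images.to(device, non_blocking=True).flatten(1)
    labels = labels.to(device, non_blocking=True)
    logits = model(images)
```

These DataLoader settings only pipeline reads on the training node; the architecture presented in the talk targets the cloud storage and caching layer that sits underneath this loop.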
Syllabus
How to Eliminate the I/O Bottleneck and Continuously Feed the GPU While Training in the Cloud - Lu Qiu
Taught by
Linux Foundation
Related Courses
Google Cloud Big Data and Machine Learning Fundamentals en Español (Google Cloud via Coursera)
Data Analysis with Python (IBM via Coursera)
Intro to TensorFlow 日本語版 (Google Cloud via Coursera)
TensorFlow on Google Cloud - Français (Google Cloud via Coursera)
Freedom of Data with SAP Data Hub (SAP Learning)