How to Eliminate the I/O Bottleneck and Continuously Feed the GPU While Training in the Cloud
Offered By: Linux Foundation via YouTube
Course Description
Overview
Discover strategies to optimize I/O performance and keep GPUs fully utilized during machine learning model training in the cloud. This 27-minute conference talk examines why data-intensive training is so demanding, focusing on the frequent I/O generated by large numbers of small files such as images and audio clips. Learn about a novel architecture designed to accelerate the entire data pipeline and sustain the high throughput that GPUs demand. Gain insights into applying this architecture to PyTorch workloads on Kubernetes in public cloud environments, and into how the data access patterns and I/O challenges of model training differ from those of traditional data analytics.
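The talk's specific architecture is not reproduced here, but the general idea it targets, keeping the GPU fed rather than stalled on small-file reads, can be sketched at the PyTorch DataLoader level. Everything in the example below (the SmallFileDataset class, the placeholder file paths, and the particular worker and prefetch settings) is an illustrative assumption, not code from the talk.

```python
import torch
from torch.utils.data import DataLoader, Dataset


class SmallFileDataset(Dataset):
    """Hypothetical dataset of many small files (e.g. images or audio clips)."""

    def __init__(self, paths):
        self.paths = paths

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        # In practice this would read and decode one small file from storage;
        # a dummy tensor stands in here so the sketch stays self-contained.
        return torch.randn(3, 224, 224), idx % 10


paths = [f"sample_{i}.jpg" for i in range(1024)]  # placeholder file list
loader = DataLoader(
    SmallFileDataset(paths),
    batch_size=64,
    num_workers=8,            # parallel workers hide per-file I/O latency
    pin_memory=True,          # page-locked host memory speeds host-to-GPU copies
    prefetch_factor=4,        # each worker keeps several batches queued ahead
    persistent_workers=True,  # avoid re-spawning workers every epoch
)

device = "cuda" if torch.cuda.is_available() else "cpu"
for images, labels in loader:
    # non_blocking copies let data transfer overlap with GPU compute
    images = images.to(device, non_blocking=True)
    labels = labels.to(device, non_blocking=True)
    # ... forward/backward pass would run here ...
```

The point of the sketch is that per-file read latency is hidden by worker parallelism and prefetching, so the GPU spends its time computing instead of waiting on I/O; the talk describes how to extend this kind of optimization across the entire data pipeline for PyTorch workloads running on Kubernetes in public clouds.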
Syllabus
How to Eliminate the I/O Bottleneck and Continuously Feed the GPU While Training in the Cloud - Lu Qiu
Taught by
Linux Foundation
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Natural Language Processing - Columbia University via Coursera
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent