Minimizing GPU Cost for Deep Learning on Kubernetes
Offered By: Linux Foundation via YouTube
Course Description
Overview
Explore a GPU-sharing solution for native Kubernetes that minimizes cost and improves efficiency for deep learning tasks. Learn how to define a GPU-sharing API, implement scheduling without modifying the core scheduler code, and integrate GPU isolation with Kubernetes. Discover techniques for running multiple TensorFlow jobs on a single GPU device within a Kubernetes cluster, significantly improving GPU utilization for AI model development, debugging, and inference services. Gain insights from Alibaba experts on addressing wasted GPU resources in clusters and optimizing deep learning workflows on Kubernetes.
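The sharing approach described above works by exposing GPU memory, rather than whole GPU devices, as a schedulable Kubernetes resource, so a scheduler extender can pack several jobs onto one card. A minimal Python sketch of such a pod request follows; the extended-resource name `aliyun.com/gpu-mem` is an assumption based on Alibaba's open-source gpushare scheduler extender, not a core Kubernetes resource, and the helper function is hypothetical:

```python
# Sketch of a Kubernetes pod manifest requesting a slice of GPU memory
# instead of a whole device. The resource name "aliyun.com/gpu-mem" is an
# assumption taken from Alibaba's open-source gpushare-scheduler-extender;
# a custom scheduler extender (not core kube-scheduler code) interprets it.

def gpu_share_pod(name: str, image: str, gpu_mem_gib: int) -> dict:
    """Build a pod manifest whose container asks for gpu_mem_gib GiB of GPU memory.

    A scheduler extender can read this extended-resource request, choose a
    node and device with enough free GPU memory, and bind the pod -- all
    without modifying the core scheduler.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": image,
                "resources": {
                    # Extended resource: a memory slice, not a full nvidia.com/gpu
                    "limits": {"aliyun.com/gpu-mem": gpu_mem_gib},
                },
            }],
        },
    }

# Two such pods asking for 4 GiB each could land on the same 16 GiB GPU.
pod = gpu_share_pod("tf-train", "tensorflow/tensorflow:latest-gpu", 4)
print(pod["spec"]["containers"][0]["resources"]["limits"])
```

Because the request is an ordinary extended resource, existing tooling (`kubectl`, quotas) handles it unchanged; only the extender and a node-side device plugin need to understand its GPU-memory semantics.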
Syllabus
Minimizing GPU Cost for Your Deep Learning on Kubernetes - Kai Zhang & Yang Che, Alibaba
Taught by
Linux Foundation
Tags
Related Courses
Software as a Service - University of California, Berkeley via Coursera
Software Defined Networking - Georgia Institute of Technology via Coursera
Pattern-Oriented Software Architectures: Programming Mobile Services for Android Handheld Systems - Vanderbilt University via Coursera
Web-Technologien - openHPI
Données et services numériques, dans le nuage et ailleurs - Certificat informatique et internet via France Université Numerique