
Develop ML Interactive GPU-Workflows with Visual Studio Code, Docker and Dockerhub

Offered By: Docker via YouTube

Tags

Conference Talks, Machine Learning, C++, TensorFlow, Docker, PyTorch, CUDA, Visual Studio Code

Course Description

Overview

Explore a 45-minute conference talk on developing interactive ML GPU workflows using Visual Studio Code, Docker, and Docker Hub. Learn how to overcome common challenges such as CUDA errors, GPU detection issues, and problems loading custom C++ code. Discover how NVIDIA container tooling and Docker can be used to manage multiple CUDA and NVCC versions while developing inside GPU-enabled containers. Gain insights into the history of GPU development, driver installation, and how the tooling has adapted over time. Understand the anatomy of base images, guidelines for building GPU images, and considerations for running GPU containers. Follow along as the speaker demonstrates how to configure a project and streamline the development process for machine learning workflows.
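
To make the GPU-detection theme concrete (this is not code from the talk, just a minimal sketch), the following Python script could be run inside a GPU-enabled container, assuming the image ships PyTorch and the container was started with GPU access, for example with docker run --gpus all; the file name and image placeholder are hypothetical.

# gpu_check.py - sanity check for GPU visibility inside a container (hypothetical file name).
# Assumes PyTorch is installed in the image and the container was started with GPU access,
# e.g. docker run --gpus all <image> python gpu_check.py
import torch

def main() -> None:
    # torch.version.cuda is the CUDA version PyTorch was built against,
    # which can differ from the driver version on the host.
    print(f"PyTorch built against CUDA: {torch.version.cuda}")
    print(f"CUDA available: {torch.cuda.is_available()}")

    if torch.cuda.is_available():
        for i in range(torch.cuda.device_count()):
            print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
    else:
        # A False result here is the classic GPU detection issue mentioned above:
        # the container may be missing the --gpus flag or the NVIDIA container runtime.
        print("No GPU visible inside this container.")

if __name__ == "__main__":
    main()

Running the same check against images built from different CUDA base tags is one way to confirm that a project works across the multiple CUDA and NVCC versions discussed in the talk.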

Syllabus

Intro
A history perspective
Install the drivers
Let us summarize the experience
Did you know...
Assume you have a GPU
Driver installation
Technology adapts
Solving the issue of injecting GPU Devices
External Requirements
Reduced complexity
Anatomy and Structure of Base Images
Common guidelines
Considerations while building GPU Images
Pinpoint your dependencies
Considerations while running GPU Containers
Configuring a project


Taught by

Docker

Related Courses

Computer Graphics
University of California, San Diego via edX
Intro to Parallel Programming
Nvidia via Udacity
Initiation à la programmation (en C++)
École Polytechnique Fédérale de Lausanne via Coursera
C++ For C Programmers, Part A
University of California, Santa Cruz via Coursera
Introduction à la programmation orientée objet (en C++)
École Polytechnique Fédérale de Lausanne via Coursera