
Using GPUs to Scale and Speed-up Deep Learning

Offered By: IBM via edX

Tags

Deep Learning Courses, Computer Vision Courses, Cloud Computing Courses, TensorFlow Courses, Image Classification Courses, Distributed Computing Courses, Object Recognition Courses, GPU Acceleration Courses

Course Description

Overview

Please Note: Learners who successfully complete this IBM course can earn a skill badge: a detailed, verifiable digital credential that profiles the knowledge and skills you've acquired in this course. Enroll to learn more, complete the course, and claim your badge!

Training a complex deep learning model on a very large dataset can take hours, days, or occasionally weeks. So, what is the solution? Accelerated hardware.

You can use accelerated hardware such as Google's Tensor Processing Unit (TPU) or an Nvidia GPU to cut your convolutional neural network computation time in the cloud. These chips are specifically designed to support the training of neural networks, as well as the use of trained networks (inference). Accelerated hardware has been shown to reduce training time significantly.
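As a concrete illustration of the CPU-vs.-GPU comparison covered in Module 2 below, here is a minimal sketch (not taken from the course, assuming TensorFlow 2 is installed) that times the same matrix multiplication on the CPU and, if one is visible, on a GPU:

import time
import tensorflow as tf

def time_matmul(device, n=4000):
    # Build two random matrices on the given device and time one matmul.
    with tf.device(device):
        a = tf.random.uniform((n, n))
        b = tf.random.uniform((n, n))
        start = time.time()
        c = tf.matmul(a, b)
        _ = c.numpy()  # block until the computation actually finishes
    return time.time() - start

print("GPUs visible:", tf.config.list_physical_devices("GPU"))
print("CPU time (s):", time_matmul("/CPU:0"))
if tf.config.list_physical_devices("GPU"):
    print("GPU time (s):", time_matmul("/GPU:0"))

The gap between the two timings is the speed-up this course sets out to exploit at scale.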

But the problem is that your data might be sensitive, and you may not feel comfortable uploading it to a public cloud, preferring to analyze it on-premises. In this case, you need an in-house system with GPU support. One solution is to use IBM Power Systems with Nvidia GPUs and PowerAI. The PowerAI platform supports popular machine learning libraries and dependencies, including TensorFlow, Caffe, Torch, and Theano.

In this course, you'll learn what GPU-based accelerated hardware is and how it can benefit your deep learning scaling needs. You'll also deploy deep learning networks on GPU-accelerated hardware for several problems, including the classification of images and videos.


Syllabus

Module 1 – Quick review of Deep Learning
Intro to Deep Learning
Deep Learning Pipeline

Module 2 – Hardware Accelerated Deep Learning
How to accelerate a deep learning model?
Running TensorFlow operations on CPUs vs. GPUs
Convolutional Neural Networks on GPUs
Recurrent Neural Networks on GPUs

Module 3 – Deep Learning in the Cloud
Deep Learning in the Cloud
How does one use a GPU?

Module 4 – Distributed Deep Learning
Distributed Deep Learning (a brief illustration follows the syllabus)

Module 5 – PowerAI Vision
Computer Vision
Image Classification
Object Recognition in Videos
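Module 4's distributed deep learning topic, expressed in TensorFlow terms, usually means synchronous data-parallel training. The sketch below is illustrative only, not course material; it assumes TensorFlow 2 and mirrors a small Keras model across all visible GPUs with tf.distribute.MirroredStrategy:

import numpy as np
import tensorflow as tf

# Mirror the model's variables across every GPU the runtime can see
# (falls back to a single device if none are available).
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Variables must be created inside the strategy scope to be mirrored.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

# Placeholder data stands in for a real dataset; each global batch is
# split across the replicas.
x = np.random.rand(1024, 784).astype("float32")
y = np.random.randint(0, 10, size=(1024,)).astype("int32")
model.fit(x, y, batch_size=256, epochs=1)

Each replica processes its slice of every batch, and the per-replica gradients are averaged before the shared weights are updated.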


Taught by

Saeed Aghabozorgi

Related Courses

Advanced AI Techniques for the Supply Chain
LearnQuest via Coursera
Advanced Computer Vision with TensorFlow
DeepLearning.AI via Coursera
Analizando imágenes con Amazon Rekognition
Coursera Project Network via Coursera
AutoML avec AutoKeras - Classification d'images
Coursera Project Network via Coursera
AWS SimuLearn: Get Home Safe
Amazon Web Services via AWS Skill Builder