Stanford Seminar - HPC Opportunities in Deep Learning - Greg Diamos, Baidu
Offered By: Stanford University via YouTube
Course Description
Overview
Explore the intersection of High-Performance Computing (HPC) and Deep Learning in this Stanford seminar featuring Greg Diamos from Baidu. Delve into recent successes in the field, including Deep Speech 2, and understand how deep learning scales. Examine the opportunities for HPC in this domain, considering workload characteristics and potential pitfalls. Learn about the importance of dense compute, fast interconnects, and elastic SGD. Discover optimized kernels, specialized I/O systems, and memory-efficient backpropagation techniques. Investigate model parallelism and the challenges and benefits of low-precision training. Gain insights into the future of HPC in deep learning and the critical factors to consider for advancing this rapidly evolving field.
Syllabus
Introduction.
Success this year.
Deep Speech 2.
Deep learning scales.
The opportunity for HPC.
Workload characteristics.
Beware of ignoring work efficiency.
Beware of ignoring speed of light.
Dense compute.
Fast Interconnects.
Elastic SGD.
Optimized kernels.
Specialized I/O systems.
Memory-efficient backpropagation.
Model parallelism.
Low precision training.
Low precision issues.
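The syllabus closes on low-precision training and its issues. As an illustration (my own sketch, not material from the seminar), the snippet below shows the core numerical problem: small gradients underflow to zero in float16, and loss scaling, a standard mixed-precision workaround, scales values into the representable range before the cast and unscales them afterward:

```python
import numpy as np

# A gradient that is meaningful in float32 but below the smallest
# float16 subnormal (~6e-8), so a naive cast loses it entirely.
grad = np.float32(1e-8)
print(np.float16(grad))            # underflows to 0.0

# Loss scaling: multiply by a large constant before casting to
# float16, then divide it back out in float32 after the cast.
scale = np.float32(1024.0)
scaled = np.float16(grad * scale)  # now within float16 range
recovered = np.float32(scaled) / scale
print(recovered)                   # close to the original 1e-8
```

The scale factor 1024 here is illustrative; in practice it is chosen (often dynamically) to be as large as possible without overflowing the largest gradients.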
Taught by
Stanford Online
Related Courses
High Performance Computing - Georgia Institute of Technology via Udacity
Introduction to Parallel Programming Using OpenMP and MPI - Tomsk State University via Coursera
High Performance Computing in the Cloud - Dublin City University via FutureLearn
Production Machine Learning Systems - Google Cloud via Coursera
LAFF-On Programming for High Performance - The University of Texas at Austin via edX