Efficient Distributed Deep Learning Using MXNet
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore efficient distributed deep learning techniques using MXNet in this 45-minute lecture by Anima Anandkumar of UC Irvine. Delve into practical considerations for machine learning, challenges in deploying large-scale learning, and declarative programming. Discover MXNet's mixed programming paradigm, which combines imperative and symbolic styles, and its hierarchical parameter server for multi-machine training. Examine tensor contraction as a neural network layer, and learn about Amazon AI services such as Rekognition for object, scene, and facial analysis, and Polly for voice quality and pronunciation. The talk is part of the Simons Institute's Computational Challenges in Machine Learning series.
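To give a flavor of the "tensor contraction as a layer" idea mentioned above, here is a minimal NumPy sketch (not code from the talk; all shapes and names are illustrative assumptions): a contraction layer replaces a dense layer's flatten-then-multiply with a direct contraction over the feature modes of a multi-dimensional input.

```python
import numpy as np

# Illustrative sketch of a tensor contraction layer.
# Shapes are assumptions for the example, not from the lecture.
rng = np.random.default_rng(0)

# A batch of inputs with two feature modes (e.g. a small feature map).
x = rng.standard_normal((32, 8, 10))   # (batch, mode1, mode2)

# A weight tensor that contracts both feature modes into one output dim.
w = rng.standard_normal((8, 10, 16))   # (mode1, mode2, out)

# Contraction: y[b, o] = sum over i, j of x[b, i, j] * w[i, j, o]
y = np.einsum('bij,ijo->bo', x, w)

print(y.shape)  # (32, 16)
```

Note that this particular contraction is mathematically equivalent to flattening the input to shape (32, 80) and multiplying by a (80, 16) matrix; the tensor view becomes interesting when the weight tensor is given a low-rank factorization, which reduces the parameter count.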
Syllabus
Intro
PRACTICAL CONSIDERATIONS FOR MACHINE LEARNING
CHALLENGES IN DEPLOYING LARGE-SCALE LEARNING
DECLARATIVE PROGRAMMING
MXNET: MIXED PROGRAMMING PARADIGM
WRITING PARALLEL PROGRAMS IS HARD
HIERARCHICAL PARAMETER SERVER IN MXNET
TENSORS, DEEP LEARNING & MXNET
TENSOR CONTRACTION AS A LAYER
Introducing Amazon AI
Rekognition: Object & Scene Detection
Rekognition: Facial Analysis
Polly: A Focus On Voice Quality & Pronunciation
Taught by
Simons Institute
Related Courses
Challenges and Opportunities in Applying Machine Learning - Alex Jaimes - ODSC East 2018 (Open Data Science via YouTube)
Benchmarks and How-Tos for Convolutional Neural Networks on HorovodRunner-Enabled Apache Spark Clusters (Databricks via YouTube)
SHADE - Enable Fundamental Cacheability for Distributed Deep Learning Training (USENIX via YouTube)
Alpa - Automating Inter- and Intra-Operator Parallelism for Distributed Deep Learning (USENIX via YouTube)
Horovod - Distributed Deep Learning for Reliable MLOps (Linux Foundation via YouTube)