Distributed TensorFlow Training - Google I/O 2018
Offered By: TensorFlow via YouTube
Course Description
Overview
Learn how to efficiently scale machine learning model training across multiple GPUs and machines using TensorFlow's distribution strategies in this 35-minute Google I/O '18 conference talk. Explore the Distribution Strategy API, which enables distributed training with minimal code changes. Discover techniques for data parallelism, synchronous and asynchronous parameter updates, and model parallelism. Follow a demonstration of setting up distributed training on Google Cloud and examine performance benchmarks for ResNet50. Gain insights into optimizing input pipelines, including parallelizing file reading and transformations, pipelining with prefetching, and using fused transformation ops. Access additional resources and performance guides to further enhance your distributed TensorFlow training skills.
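The key idea from the talk is that an existing single-machine Estimator job becomes a multi-GPU job by passing a distribution strategy into its RunConfig. Below is a minimal sketch of that pattern using today's tf.distribute.MirroredStrategy; the talk itself showed the TF 1.x tf.contrib.distribute namespace, the model_fn and input_fn here are toy placeholders rather than code from the talk, and running it assumes a TensorFlow version that still ships the Estimator API (roughly 2.15 or earlier).

import tensorflow as tf

def model_fn(features, labels, mode):
    # Toy model: one dense layer producing 10-class logits (a placeholder, not ResNet50).
    logits = tf.keras.layers.Dense(10)(features)
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
    optimizer = tf.compat.v1.train.GradientDescentOptimizer(0.1)
    train_op = optimizer.minimize(
        loss, global_step=tf.compat.v1.train.get_or_create_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

def input_fn():
    # Toy in-memory dataset; a real job would read files, as in the input pipeline sketch
    # after the syllabus below.
    features = tf.random.uniform([1024, 32])
    labels = tf.random.uniform([1024], maxval=10, dtype=tf.int64)
    return tf.data.Dataset.from_tensor_slices((features, labels)).repeat().batch(64)

# The distribution-specific change: mirror the model on every local GPU and aggregate
# gradients with all-reduce by passing the strategy to RunConfig.
strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)

estimator = tf.estimator.Estimator(model_fn=model_fn, config=config)
estimator.train(input_fn, steps=100)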
Syllabus
Intro
Training can take a long time
Scaling with Distributed Training
Data parallelism
Async Parameter Server
Sync Allreduce Architecture
Ring Allreduce Architecture
Model parallelism
Distribution Strategy API: a high-level API to distribute your training
# Training with Estimator API
# Training on multiple GPUs with Distribution Strategy
Mirrored Strategy
Demo Setup on Google Cloud
Performance Benchmarks
A simple input pipeline for ResNet50
Input pipeline as an ETL process (see the tf.data sketch after this syllabus)
Input pipeline bottleneck
Parallelize file reading
Parallelize map transformations
Pipelining with prefetching
Using fused transformation ops
Work In Progress
TensorFlow Resources
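As a companion to the input pipeline items above (parallel file reading, parallel map transformations, prefetching, fused transformation ops), here is a small sketch of an ETL-style tf.data pipeline. The file pattern and parse function are hypothetical placeholders, and the code uses the current tf.data API; the TF 1.x version shown in the talk relied on tf.contrib.data.parallel_interleave and the fused map_and_batch op, whereas newer TensorFlow can fuse map and batch automatically.

import tensorflow as tf

# Hypothetical TFRecord shard pattern; substitute your own dataset.
FILE_PATTERN = "/data/imagenet/train-*.tfrecord"

def parse_and_preprocess(serialized):
    # Placeholder decode/resize step; a real ResNet50 pipeline would add random crops,
    # flips and per-channel normalization here.
    features = tf.io.parse_single_example(serialized, {
        "image": tf.io.FixedLenFeature([], tf.string),
        "label": tf.io.FixedLenFeature([], tf.int64),
    })
    image = tf.io.decode_jpeg(features["image"], channels=3)
    image = tf.image.resize(image, [224, 224]) / 255.0
    return image, features["label"]

files = tf.data.Dataset.list_files(FILE_PATTERN)

# Extract: read several shard files concurrently instead of one after another.
dataset = files.interleave(
    tf.data.TFRecordDataset,
    cycle_length=8,
    num_parallel_calls=tf.data.AUTOTUNE)

# Transform: run the map function on multiple elements in parallel, then batch
# (the runtime can fuse the map and batch steps).
dataset = dataset.map(parse_and_preprocess, num_parallel_calls=tf.data.AUTOTUNE)
dataset = dataset.batch(128)

# Load: prefetch so host-side preprocessing overlaps with training on the accelerator.
dataset = dataset.prefetch(tf.data.AUTOTUNE)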
Taught by
TensorFlow
Related Courses
Biomolecular Modeling on GPU - Moscow Institute of Physics and Technology via Coursera
LLM Server - Pragmatic AI Labs via edX
AI Infrastructure and Operations Fundamentals - Nvidia via Coursera
Open Source LLMOps Solutions - Duke University via Coursera
Deep Learning - Computer Vision for Beginners Using PyTorch - Packt via Coursera