Distributed TensorFlow Training - Google I/O 2018
Offered By: TensorFlow via YouTube
Course Description
Overview
Learn how to efficiently scale machine learning model training across multiple GPUs and machines using TensorFlow's distribution strategies in this 35-minute Google I/O '18 conference talk. Explore the Distribution Strategy API, which enables distributed training with minimal code changes. Discover techniques for data parallelism, synchronous and asynchronous parameter updates, and model parallelism. Follow a demonstration of setting up distributed training on Google Cloud and examine performance benchmarks for ResNet50. Gain insights into optimizing input pipelines, including parallelizing file reading and transformations, pipelining with prefetching, and using fused transformation ops. Access additional resources and performance guides to further enhance your distributed TensorFlow training skills.
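As context for the Distribution Strategy portion of the talk, here is a rough sketch of the pattern it demonstrates (not the exact demo code): with the TF 1.x-era Estimator API that was current at the time, switching single-GPU training to multi-GPU training is mostly a RunConfig change. The model_fn and input_fn below are toy placeholders standing in for the talk's ResNet50 setup.

    import tensorflow as tf

    def input_fn():
        # Toy in-memory dataset standing in for a real input pipeline.
        features = tf.random_uniform([1024, 10])
        labels = tf.random_uniform([1024], maxval=2, dtype=tf.int32)
        dataset = tf.data.Dataset.from_tensor_slices(({"x": features}, labels))
        return dataset.repeat().batch(64)

    def model_fn(features, labels, mode):
        # Minimal linear classifier standing in for ResNet50.
        logits = tf.layers.dense(features["x"], 2)
        loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
        train_op = tf.train.GradientDescentOptimizer(0.1).minimize(
            loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

    # The distribution-specific change: MirroredStrategy replicates the model on
    # each local GPU and aggregates gradients with allreduce (assumes a multi-GPU
    # machine and the TF 1.8-era tf.contrib.distribute namespace).
    strategy = tf.contrib.distribute.MirroredStrategy()
    config = tf.estimator.RunConfig(train_distribute=strategy)

    estimator = tf.estimator.Estimator(model_fn=model_fn, config=config)
    estimator.train(input_fn=input_fn, steps=100)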
Syllabus
Intro
Training can take a long time
Scaling with Distributed Training
Data parallelism
Async Parameter Server
Sync Allreduce Architecture
Ring Allreduce Architecture
Model parallelism
Distribution Strategy API: a high-level API to distribute your training
Training with the Estimator API
Training on multiple GPUs with Distribution Strategy
Mirrored Strategy
Demo Setup on Google Cloud
Performance Benchmarks
A simple input pipeline for ResNet50
Input pipeline as an ETL Process
Input pipeline bottleneck
Parallelize file reading
Parallelize map transformations
Pipelining with prefetching
Using fused transformation ops (see the tf.data sketch after this syllabus)
Work In Progress
TensorFlow Resources
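The input pipeline chapters above correspond roughly to the following tf.data pattern. This is a hedged sketch assuming the tf.contrib.data fused ops available around TF 1.8; the file pattern and feature spec are illustrative placeholders, not the talk's exact ResNet50 pipeline.

    import tensorflow as tf

    def parse_example(serialized):
        # Placeholder feature spec; the real one depends on your TFRecord schema.
        parsed = tf.parse_single_example(serialized, {
            "image": tf.FixedLenFeature([], tf.string),
            "label": tf.FixedLenFeature([], tf.int64)})
        image = tf.image.decode_jpeg(parsed["image"], channels=3)
        image = tf.image.resize_images(image, [224, 224])
        return image, parsed["label"]

    def input_fn():
        files = tf.data.Dataset.list_files("/data/train-*.tfrecord")  # placeholder path
        # Parallelize file reading: interleave records from several files at once.
        dataset = files.apply(tf.contrib.data.parallel_interleave(
            tf.data.TFRecordDataset, cycle_length=8))
        # Fused transformation op: parse and batch together, with parallel map calls.
        dataset = dataset.apply(tf.contrib.data.map_and_batch(
            parse_example, batch_size=64, num_parallel_batches=4))
        # Pipelining with prefetching: overlap CPU-side ETL with accelerator compute.
        return dataset.prefetch(buffer_size=1)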
Taught by
TensorFlow
Related Courses
A Tour of Google Cloud Sustainability (Google via Google Cloud Skills Boost)
Accessing the Internet from Lambda in a VPC (Amazon Web Services via AWS Skill Builder)
Advanced Terraform with GCP (A Cloud Guru)
Choosing the Right Database Service on GCP (A Cloud Guru)
Cost Control on GCP (A Cloud Guru)