Cloud TPU Pods - AI Supercomputing for Large Machine Learning Problems
Offered By: TensorFlow via YouTube
Course Description
Overview
Explore the technical details of Cloud TPU and Cloud TPU Pods in this 42-minute conference talk from Google I/O'19. Dive into the domain-specific architecture designed to accelerate TensorFlow training and prediction workloads, delivering performance benefits for production machine learning. Discover new TensorFlow features that enable large-scale model parallelism for deep learning training. Learn about TPU v2 hardware, object detection, Google Cloud Platform notebooks, the pod interconnect, pricing considerations, and scalability. Gain insights into model-parallel techniques, training Transformer models, and optimizing learning rate schedules. Presented by Kaz Sato and Martin Gorner, the talk also covers accuracy boosting, the power of data, and the complexities of AI supercomputing, and includes a demo showcasing the capabilities of Cloud TPU Pods on large machine learning problems.
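For orientation, the sketch below (not taken from the talk) shows how a Keras model can be pointed at a Cloud TPU or TPU Pod slice using TensorFlow's TPUStrategy distribution API. The TPU name "my-tpu" and the toy model are placeholder assumptions, and the code reflects the TF 2.x API rather than the TF 1.x tooling current at the time of the talk.

import tensorflow as tf

# "my-tpu" is a placeholder; on Cloud TPU VMs an empty string often
# lets the resolver discover the attached TPU automatically.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# TPUStrategy replicates the model across all TPU cores (data parallelism).
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

# Larger pod slices generally call for larger global batch sizes and
# adjusted learning rate schedules, as discussed in the talk.
# model.fit(dataset, epochs=10)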
Syllabus
Introduction
TPU v2
TensorFlow
Keras
Models
Object Detection
Google Cloud Platform Notebook
The Interconnect
Pricing
Cost
eBay
Accuracy Boost
Data Power
Scalability
Complexities
Model Parallel
Mesh TensorFlow
Magenta Fraud
Training Transformer Model
Anjana
Demo
Feeds
Training Time
Learning Rate Schedule
Summary
Taught by
TensorFlow
Related Courses
Creative Applications of Deep Learning with TensorFlow - Kadenze
Creative Applications of Deep Learning with TensorFlow III - Kadenze
Creative Applications of Deep Learning with TensorFlow II - Kadenze
6.S191: Introduction to Deep Learning - Massachusetts Institute of Technology via Independent
Learn TensorFlow and deep learning, without a Ph.D. - Google via Independent