Custom and Distributed Training with TensorFlow
Offered By: DeepLearning.AI via Coursera
Course Description
Overview
In this course, you will:
• Learn about Tensor objects, the fundamental building blocks of TensorFlow; understand the difference between eager mode and graph mode in TensorFlow; and learn how to use TensorFlow’s GradientTape to calculate gradients.
• Build your own custom training loops using GradientTape and TensorFlow Datasets to gain more flexibility and visibility into your model training.
• Learn about the benefits of generating code that runs in graph mode, take a peek at what graph code looks like, and practice generating this more efficient code automatically with TensorFlow’s tools.
• Harness the power of distributed training to process more data and train larger models, faster; get an overview of various distributed training strategies; and practice working with one strategy that trains on multiple GPUs and another that trains on multiple TPU cores.
The DeepLearning.AI TensorFlow: Advanced Techniques Specialization introduces the features of TensorFlow that give learners more control over their model architecture, along with tools that help them create and train advanced ML models.
This Specialization is for early and mid-career software and machine learning engineers with a foundational understanding of TensorFlow who are looking to expand their knowledge and skill set by learning advanced TensorFlow features to build powerful models.
Syllabus
- Differentiation and Gradients
- This week, you will get a detailed look at the fundamental building blocks of TensorFlow: Tensor objects. You will be able to describe the difference between eager mode and graph mode in TensorFlow, and explain why eager mode is very user friendly for you as a developer. You will also use TensorFlow tools to calculate gradients, so that you don’t have to dig out your old calculus textbooks the next time you need a gradient (see the first sketch after the syllabus)!
- Custom Training
- This week, you will build custom training loops using GradientTape and TensorFlow Datasets. Writing your own training loops gives you more flexibility and visibility into your model training. You will also let GradientTape calculate the derivatives for you, so that, once again, your old calculus textbooks can stay on the shelf (see the second sketch after the syllabus).
- Graph Mode
- This week, you’ll learn about the benefits of generating code that runs in “graph mode”. You’ll take a peek at what graph code looks like, and you’ll practice generating this more efficient code automatically with TensorFlow’s tools, so that you don’t have to write the graph code yourself (see the third sketch after the syllabus)!
- Distributed Training
- This week, you will harness the power of distributed training to process more data and train larger models, faster. You’ll get an overview of various distributed training strategies and then practice working with two of them: one that trains on multiple GPUs and another that trains on multiple TPU cores (see the fourth sketch after the syllabus). Get your cape ready, because you’re going to get some superpowers this week!
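The sketches below illustrate the main tool from each week of the syllabus. First, differentiation and gradients: a minimal sketch of computing a gradient with tf.GradientTape, TensorFlow’s automatic-differentiation tool. The polynomial and values are illustrative, not taken from the course materials.

```python
import tensorflow as tf

# GradientTape records operations on trainable tf.Variables automatically.
x = tf.Variable(3.0)

with tf.GradientTape() as tape:
    y = x ** 2 + 2.0 * x + 1.0  # y = x^2 + 2x + 1

# dy/dx = 2x + 2, so the gradient at x = 3.0 is 8.0.
dy_dx = tape.gradient(y, x)
print(dy_dx.numpy())  # 8.0
```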
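Second, custom training: a minimal sketch of a custom training loop that combines GradientTape with a tf.data.Dataset. The toy model, data shapes, and hyperparameters are assumptions for illustration, not the course’s notebooks.

```python
import tensorflow as tf

# Toy regression data wrapped in a tf.data.Dataset (shapes are illustrative).
xs = tf.random.normal((256, 4))
ys = tf.random.normal((256, 1))
dataset = tf.data.Dataset.from_tensor_slices((xs, ys)).shuffle(256).batch(32)

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
loss_fn = tf.keras.losses.MeanSquaredError()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

for epoch in range(3):
    for x_batch, y_batch in dataset:
        with tf.GradientTape() as tape:
            preds = model(x_batch, training=True)
            loss = loss_fn(y_batch, preds)
        # Differentiate the loss w.r.t. the model weights and take a step.
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
    print(f"epoch {epoch}: last batch loss = {loss.numpy():.4f}")
```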
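Third, graph mode: wrapping a Python function in tf.function traces it into a TensorFlow graph, and tf.autograph.to_code prints the graph-style code that AutoGraph generates, so you never write it by hand. The function itself is an arbitrary example.

```python
import tensorflow as tf

def square_sum(a, b):
    return tf.reduce_sum(a * a + b * b)

# tf.function traces the Python function into a callable TensorFlow graph.
graph_square_sum = tf.function(square_sum)

a = tf.constant([1.0, 2.0])
b = tf.constant([3.0, 4.0])
print(graph_square_sum(a, b).numpy())  # 1 + 4 + 9 + 16 = 30.0

# AutoGraph can show the graph-mode code it generated from the function.
print(tf.autograph.to_code(square_sum))
```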
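Fourth, distributed training: a minimal sketch using tf.distribute.MirroredStrategy, which replicates training across multiple GPUs (TPU training uses tf.distribute.TPUStrategy instead). This sketch drives training with Keras model.fit for brevity; the same strategy scope can also wrap a custom training loop. The toy model and data are illustrative.

```python
import tensorflow as tf

# MirroredStrategy replicates training across the GPUs visible on one
# machine (falling back to the CPU if none is present).
strategy = tf.distribute.MirroredStrategy()
print("replicas in sync:", strategy.num_replicas_in_sync)

# Model and optimizer variables must be created inside the strategy's scope.
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="sgd", loss="mse")

# Toy data; model.fit shards each batch across the replicas automatically.
xs = tf.random.normal((256, 4))
ys = tf.random.normal((256, 1))
model.fit(xs, ys, batch_size=32, epochs=2)
```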
Taught by
Laurence Moroney and Eddy Shyu
Related Courses
AWS Certified Machine Learning - Specialty (LA) (A Cloud Guru)
Google Cloud AI Services Deep Dive (A Cloud Guru)
Introduction to Machine Learning (A Cloud Guru)
Deep Learning and Python Programming for AI with Microsoft Azure (Cloudswyft via FutureLearn)
Advanced Artificial Intelligence on Microsoft Azure: Deep Learning, Reinforcement Learning and Applied AI (Cloudswyft via FutureLearn)