Optimize TensorFlow Models For Deployment with TensorRT
Offered By: Coursera Project Network via Coursera
Course Description
Overview
This is a hands-on, guided project on optimizing your TensorFlow models for inference with NVIDIA's TensorRT. By the end of this 1.5-hour project, you will be able to optimize TensorFlow models using the TensorFlow integration of NVIDIA's TensorRT (TF-TRT), use TF-TRT to optimize several deep learning models at FP32, FP16, and INT8 precision, and observe how tuning TF-TRT parameters affects performance and inference throughput.
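The conversion workflow the project centers on follows the pattern below. This is a minimal sketch assuming TensorFlow 2.x built with TensorRT support; the SavedModel directory names and the FP16 precision choice are illustrative placeholders, not the project's exact code.

# Minimal TF-TRT sketch (assumes TensorFlow 2.x with TensorRT available).
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Pick the precision mode to optimize for: FP32, FP16, or INT8.
params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP16)

# Point the converter at an existing SavedModel (path is a placeholder).
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="resnet50_saved_model",
    conversion_params=params)

# Replace TensorRT-compatible subgraphs with optimized TRT engine ops.
converter.convert()

# Write out the optimized SavedModel for inference.
converter.save("resnet50_saved_model_TFTRT_FP16")

Note that INT8 conversion additionally requires a calibration input function passed to convert(); the project then compares throughput across the precision modes to show the effect of these parameters.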
Prerequisites:
In order to successfully complete this project, you should be competent in Python programming, understand deep learning and what inference is, and have experience building deep learning models in TensorFlow and its Keras API.
Note: This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.
Syllabus
- Optimize TensorFlow Models For Deployment with TensorRT
- Welcome to this hands-on, guided project on optimizing your TensorFlow models for inference with NVIDIA's TensorRT. By the end of this 1.5-hour project, you will be able to optimize TensorFlow models using the TensorFlow integration of NVIDIA's TensorRT (TF-TRT), use TF-TRT to optimize several deep learning models at FP32, FP16, and INT8 precision, and observe how tuning TF-TRT parameters affects performance and inference throughput.
Taught by
Snehan Kekre
Related Courses
- AWS Certified Machine Learning - Specialty (LA) (A Cloud Guru)
- Google Cloud AI Services Deep Dive (A Cloud Guru)
- Introduction to Machine Learning (A Cloud Guru)
- Deep Learning and Python Programming for AI with Microsoft Azure (Cloudswyft via FutureLearn)
- Advanced Artificial Intelligence on Microsoft Azure: Deep Learning, Reinforcement Learning and Applied AI (Cloudswyft via FutureLearn)