Optimize TensorFlow Models For Deployment with TensorRT

Offered By: Coursera Project Network via Coursera

Tags

  • TensorFlow Courses
  • Deep Learning Courses
  • Performance Tuning Courses
  • Model Optimization Courses
  • TensorRT Courses

Course Description

Overview

This is a hands-on, guided project on optimizing your TensorFlow models for inference with NVIDIA's TensorRT. By the end of this 1.5-hour-long project, you will be able to optimize TensorFlow models using the TensorFlow integration of NVIDIA's TensorRT (TF-TRT), use TF-TRT to optimize several deep learning models at FP32, FP16, and INT8 precision, and observe how tuning TF-TRT parameters affects performance and inference throughput.

Prerequisites: In order to successfully complete this project, you should be competent in Python programming, understand deep learning and what inference is, and have experience building deep learning models in TensorFlow and its Keras API.

Note: This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.
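
For orientation, here is a minimal sketch of what a TF-TRT conversion looks like with TensorFlow 2.x built with TensorRT support; the SavedModel paths and the FP16 precision choice below are illustrative assumptions, not part of the course materials.

```python
# Minimal TF-TRT conversion sketch (assumes TensorFlow 2.x with TensorRT support
# and an existing SavedModel at the hypothetical path 'resnet50_saved_model').
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Configure conversion parameters; FP16 is shown, FP32 works the same way.
conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP16,
    max_workspace_size_bytes=1 << 30,  # 1 GiB of GPU workspace for TensorRT
)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='resnet50_saved_model',  # hypothetical input path
    conversion_params=conversion_params,
)
converter.convert()  # builds the TF-TRT optimized graph
converter.save('resnet50_saved_model_TFTRT_FP16')  # hypothetical output path
```

For INT8 precision, `convert()` additionally accepts a `calibration_input_fn` that yields representative input batches so TensorRT can calibrate quantization ranges; comparing inference throughput across FP32, FP16, and INT8 is the kind of measurement the project walks through.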

Syllabus

  • Optimize TensorFlow Models For Deployment with TensorRT
    • Welcome! This is a hands-on, guided project on optimizing your TensorFlow models for inference with NVIDIA's TensorRT. By the end of this 1.5-hour-long project, you will be able to optimize TensorFlow models using the TensorFlow integration of NVIDIA's TensorRT (TF-TRT), use TF-TRT to optimize several deep learning models at FP32, FP16, and INT8 precision, and observe how tuning TF-TRT parameters affects performance and inference throughput.

Taught by

Snehan Kekre

Related Courses

Jetson Xavier NX Developer Kit - Edge AI Supercomputer Features and Applications
Nvidia via YouTube
NVIDIA Jetson: Enabling AI-Powered Autonomous Machines at Scale
Nvidia via YouTube
Jetson AGX Xavier: Architecture and Applications for Autonomous Machines
Nvidia via YouTube
Streamline Deep Learning for Video Analytics with DeepStream SDK 2.0
Nvidia via YouTube
Inference and Quantization for AI - Session 3
Nvidia via YouTube