LLMOps: Using Nvidia TensorRT SDK for GPU Inference

Offered By: The Machine Learning Engineer via YouTube

Tags

TensorRT Courses, Machine Learning Courses, Deep Learning Courses, PyTorch Courses, Quantization Courses, Model Optimization Courses, Batch Processing Courses

Course Description

Overview

Explore the process of converting a model to TensorRT format in this 34-minute video tutorial. Learn how to compare throughput and inference time by varying batch size and data precision, using both native PyTorch inference and the TensorRT runtime. Gain practical insights into optimizing GPU inference for machine learning models, with a focus on LLMOps techniques. Access the accompanying notebook on GitHub to follow along and implement the demonstrated concepts in your own projects.
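The accompanying notebook is not reproduced here, but a minimal sketch of the kind of comparison the video describes might look like the following. It assumes the Torch-TensorRT package is installed and uses a placeholder ResNet-50 model, illustrative batch sizes, and FP16 precision; the video's own model, conversion path, and settings may differ.

import time

import torch
import torch_tensorrt
import torchvision.models as models

device = "cuda"
# Placeholder model; the video may use a different network.
model = models.resnet50(weights=None).eval().to(device)

def benchmark(module, batch_size, n_iters=100):
    """Return average inference time per batch in milliseconds."""
    x = torch.randn(batch_size, 3, 224, 224, device=device)
    with torch.no_grad():
        for _ in range(10):          # warm-up iterations
            module(x)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(n_iters):
            module(x)
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_iters * 1000

for batch_size in (1, 8, 32):
    # Compile a TensorRT engine for this batch size, enabling FP16 layers.
    trt_model = torch_tensorrt.compile(
        model,
        inputs=[torch_tensorrt.Input((batch_size, 3, 224, 224))],
        enabled_precisions={torch.float16},
    )
    native_ms = benchmark(model, batch_size)
    trt_ms = benchmark(trt_model, batch_size)
    print(f"batch={batch_size}: PyTorch {native_ms:.2f} ms, TensorRT {trt_ms:.2f} ms")

Repeating the loop with other precisions (for example FP32 vs. FP16) gives the throughput and latency comparison across batch sizes and data precisions that the tutorial focuses on.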

Syllabus

LLMOps: How to use Nvidia TensorRT SDK for GPU Inference #datascience #machinelearning


Taught by

The Machine Learning Engineer

Related Courses

Deep Learning with Python and PyTorch
IBM via edX
Introduction to Machine Learning
Duke University via Coursera
How Google does Machine Learning em Português Brasileiro
Google Cloud via Coursera
Intro to Deep Learning with PyTorch
Facebook via Udacity
Secure and Private AI
Facebook via Udacity