LLMOps: Using Nvidia TensorRT SDK for GPU Inference
Offered By: The Machine Learning Engineer via YouTube
Course Description
Overview
Explore the process of converting a model to TensorRT format in this 34-minute video tutorial. Learn how to compare throughput and inference time by varying batch size and data precision, using both native PyTorch inference and the TensorRT runtime. Gain practical insights into optimizing GPU inference for machine learning models, with a focus on LLMOps techniques. Access the accompanying notebook on GitHub to follow along and implement the demonstrated concepts in your own projects.
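To illustrate the kind of comparison the video describes, here is a minimal sketch of benchmarking native PyTorch against a TensorRT-compiled version of the same model. It assumes the torch_tensorrt (Torch-TensorRT) package, a torchvision ResNet-50 as a stand-in model, and a CUDA GPU; the course's actual notebook may use a different model or convert via ONNX instead.

# Minimal benchmarking sketch: native PyTorch vs. Torch-TensorRT.
# Assumes torch, torch_tensorrt, and torchvision are installed and a CUDA GPU is available.
import time

import torch
import torch_tensorrt
import torchvision.models as models

def benchmark(model, batch_size, dtype, n_warmup=10, n_runs=50):
    """Return mean latency (ms) and throughput (images/s) for one batch size/precision."""
    x = torch.randn(batch_size, 3, 224, 224, device="cuda", dtype=dtype)
    with torch.no_grad():
        for _ in range(n_warmup):          # warm-up: stabilize GPU clocks, fill caches
            model(x)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(n_runs):
            model(x)
        torch.cuda.synchronize()           # wait for asynchronous CUDA kernels to finish
    latency_ms = (time.perf_counter() - start) / n_runs * 1000
    return latency_ms, batch_size * 1000 / latency_ms

model = models.resnet50(weights="DEFAULT").eval().cuda()

# Compile to a TensorRT engine via Torch-TensorRT, allowing FP16 kernels.
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((8, 3, 224, 224), dtype=torch.float32)],
    enabled_precisions={torch.float32, torch.half},
)

for name, m in [("PyTorch", model), ("TensorRT", trt_model)]:
    lat, thr = benchmark(m, batch_size=8, dtype=torch.float32)
    print(f"{name}: {lat:.2f} ms/batch, {thr:.0f} images/s")

Repeating the loop over several batch sizes and with different enabled_precisions sets (FP32 only vs. FP32+FP16) reproduces the throughput-versus-precision comparison the tutorial walks through.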
Syllabus
LLMOps: How to use Nvidia TensorRT SDK for GPU Inference #datascience #machinelearning
Taught by
The Machine Learning Engineer
Related Courses
3D-печать для всех и каждого
Tomsk State University via Coursera
Developing a Multidimensional Data Model
Microsoft via edX
Launching into Machine Learning 日本語版
Google Cloud via Coursera
Art and Science of Machine Learning 日本語版
Google Cloud via Coursera
Launching into Machine Learning auf Deutsch
Google Cloud via Coursera