LLMOps: Using Nvidia TensorRT SDK for GPU Inference
Offered By: The Machine Learning Engineer via YouTube
Course Description
Overview
Explore the process of converting a model to TensorRT format in this 34-minute video tutorial. Learn how to compare throughput and inference time while varying batch size and data precision, using both native PyTorch inference and the TensorRT runtime. Gain practical insights into optimizing GPU inference for machine learning models, with a focus on LLMOps techniques. Access the accompanying notebook on GitHub to follow along and implement the demonstrated concepts in your own projects.
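The video's workflow (export a PyTorch model, build a TensorRT engine, then benchmark at different batch sizes and precisions) can be illustrated with a minimal sketch. This is not the course notebook: the ResNet-50 model, file names, and batch sizes below are assumptions chosen purely for illustration, and engine building is shown via the standard trtexec CLI rather than whatever API the video uses.

```python
# Minimal sketch (assumed setup, not the course notebook): export a PyTorch
# model to ONNX, build a TensorRT engine with trtexec, and time the native
# PyTorch baseline at several batch sizes.
import time
import torch
import torchvision

# Any PyTorch model works; ResNet-50 is used here purely as an example.
model = torchvision.models.resnet50(weights=None).eval().cuda()
dummy = torch.randn(1, 3, 224, 224, device="cuda")

# 1. Export to ONNX with a dynamic batch dimension so batch size can vary.
torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["input"], output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)

# 2. Build a TensorRT engine from the ONNX file (FP16 here; drop --fp16 for FP32):
#    trtexec --onnx=model.onnx --saveEngine=model_fp16.plan --fp16 \
#            --minShapes=input:1x3x224x224 --optShapes=input:8x3x224x224 \
#            --maxShapes=input:32x3x224x224

# 3. Baseline: time native PyTorch inference for a given batch size.
def time_pytorch(batch_size: int, iters: int = 100) -> float:
    x = torch.randn(batch_size, 3, 224, 224, device="cuda")
    with torch.no_grad():
        for _ in range(10):          # warm-up iterations
            model(x)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

for bs in (1, 8, 32):
    print(f"batch={bs}: {time_pytorch(bs) * 1000:.2f} ms/iter")
```

The same timing loop can then be repeated against the deserialized TensorRT engine to compare throughput, which is the comparison the tutorial walks through.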
Syllabus
LLMOps: How to use Nvidia TensorRT SDK for GPU Inference #datascience #machinelearning
Taught by
The Machine Learning Engineer
Related Courses
Digital Signal Processing (École Polytechnique Fédérale de Lausanne via Coursera)
Principles of Communication Systems - I (Indian Institute of Technology Kanpur via Swayam)
Digital Signal Processing 2: Filtering (École Polytechnique Fédérale de Lausanne via Coursera)
Digital Signal Processing 3: Analog vs Digital (École Polytechnique Fédérale de Lausanne via Coursera)
Digital Signal Processing 4: Applications (École Polytechnique Fédérale de Lausanne via Coursera)