LLMOps: Model Quantization and Inference with the ONNX Generative Runtime
Offered By: The Machine Learning Engineer via YouTube
Course Description
Overview
Explore LLMOps through a 30-minute video on model quantization and inference with the ONNX Generative Runtime. Learn how to install ONNX Runtime with GPU support and run inference with a generative model, specifically a Phi-3-mini-4k quantized to int4. Dive into the process of converting the original Phi-3-mini-128k into an int4-quantized version using the ONNX Runtime. Access the accompanying notebook on GitHub to follow along and gain hands-on experience in this area of data science and machine learning.
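The workflow the video covers can be sketched in a few lines of Python. The snippet below is a minimal sketch, assuming the onnxruntime-genai package, the Microsoft Phi-3 model IDs on Hugging Face, and hypothetical local directory names; the generate() API has changed between releases (older versions set params.input_ids instead of calling append_tokens), so check the calls against the version you install.

```python
# Minimal sketch of the video's workflow: install ONNX Runtime generate()
# with CUDA support, quantize Phi-3-mini-128k to int4 with the model
# builder, and stream tokens from an int4 model. Paths are hypothetical.
#
# Install with GPU support:
#   pip install onnxruntime-genai-cuda
#
# Convert the original model to an int4 ONNX version (flags may vary by
# release; -p is precision, -e is the execution provider):
#   python -m onnxruntime_genai.models.builder \
#       -m microsoft/Phi-3-mini-128k-instruct \
#       -o ./phi3-mini-128k-int4 -p int4 -e cuda

import onnxruntime_genai as og

# Load an int4-quantized model directory, e.g. the output of the model
# builder above or a prebuilt Phi-3-mini-4k-instruct-onnx download.
model = og.Model("./phi3-mini-4k-int4")
tokenizer = og.Tokenizer(model)
stream = tokenizer.create_stream()

# Phi-3 chat template for a single user turn.
prompt = "<|user|>\nWhat is int4 quantization?<|end|>\n<|assistant|>\n"
tokens = tokenizer.encode(prompt)

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)

generator = og.Generator(model, params)
generator.append_tokens(tokens)  # newer API; older: params.input_ids = tokens

# Generate and decode one token at a time until EOS or max_length.
while not generator.is_done():
    generator.generate_next_token()
    print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
```

The same loop works on CPU by installing the plain onnxruntime-genai package and building the model with -e cpu; int4 weights keep the memory footprint small enough for both targets.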
Syllabus
LLMOps: Quantization models & Inference ONNX Generative Runtime #datascience #machinelearning
Taught by
The Machine Learning Engineer
Related Courses
Learning Machine Learning with .NET, PyTorch and the ONNX Runtime (Microsoft via YouTube)
Using Apache OpenNLP with OpenSearch K-NN Vector Search (Linux Foundation via YouTube)
Accelerating High-Performance Machine Learning at Scale in Kubernetes (CNCF [Cloud Native Computing Foundation] via YouTube)
LLMs Fine Tuning and Inferencing Using ONNX Runtime - Workshop (Linux Foundation via YouTube)
Real-Time Inference of Neural Networks: A Guide for DSP Engineers (ADC - Audio Developer Conference via YouTube)