MLOps: Comparing Microsoft Phi3 Mini 128k in GGUF, MLFlow, and ONNX Formats

Offered By: The Machine Learning Engineer via YouTube

Tags

MLOps Courses, Data Science Courses, Machine Learning Courses, MLFlow Courses, ONNX Courses, GGUF Courses

Course Description

Overview

Explore the Microsoft Phi3 Mini 128k model and compare its inference performance across different formats and quantization methods in this 45-minute video tutorial. Learn how to work with the MLFlow, GGUF, and ONNX formats and examine their impact on inference time and precision. Follow along with the provided notebooks to implement MLFlow quantization with bfloat16, Llama.cpp quantization with float16 in GGUF format, ONNX CPU quantization with int4, and ONNX GPU DirectML quantization with int4. Gain insight into defining input and output parameters, managing artifacts, and moving the model through the different frameworks; illustrative sketches of each of these steps follow below. Conclude with a clear picture of the performance trade-offs between these approaches when deploying the Phi3 Mini 128k model in machine learning and data science applications.
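
As a rough illustration of the MLFlow step, the following minimal sketch loads Phi3 Mini 128k in bfloat16 and logs it with an explicit input/output signature. The model ID comes from the public Hugging Face release; the artifact path, example prompt, and parameter choices are assumptions for illustration, not the exact code from the video's notebooks.

    # Minimal sketch: load Phi3 Mini 128k in bfloat16 with transformers and
    # log it to MLFlow with an explicit signature (illustrative only).
    import mlflow
    import torch
    from mlflow.models import infer_signature
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "microsoft/Phi-3-mini-128k-instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # the bfloat16 variant compared in the video
        device_map="auto",           # requires the accelerate package
        trust_remote_code=True,
    )

    # Define example input and output so MLFlow can infer the model signature.
    signature = infer_signature(
        model_input="What is MLOps?",
        model_output="MLOps is a set of practices ...",
    )

    with mlflow.start_run():
        mlflow.transformers.log_model(
            transformers_model={"model": model, "tokenizer": tokenizer},
            artifact_path="phi3-mini-128k-bf16",  # hypothetical artifact name
            signature=signature,
            task="text-generation",
        )

Logging the signature up front is what lets MLFlow validate inputs and outputs when the logged model is later loaded or served.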
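For the GGUF path, the video quantizes with Llama.cpp and runs the model at float16. A hedged sketch using llama.cpp's conversion script and the llama-cpp-python bindings might look like this; the output file name, context size, and sampling settings are assumptions:

    # Assumed GGUF workflow, not the video's exact notebook.
    # Step 1: convert the Hugging Face checkpoint to GGUF at float16 with
    # llama.cpp's conversion script, e.g.:
    #   python convert_hf_to_gguf.py Phi-3-mini-128k-instruct \
    #       --outfile phi3-mini-128k-f16.gguf --outtype f16
    # Step 2: run inference through the llama-cpp-python bindings.
    from llama_cpp import Llama

    llm = Llama(
        model_path="phi3-mini-128k-f16.gguf",  # hypothetical output file
        n_ctx=4096,       # context window for the test; the model supports 128k
        n_gpu_layers=-1,  # offload all layers to the GPU if one is available
    )

    output = llm(
        "<|user|>\nWhat is MLOps?<|end|>\n<|assistant|>",  # Phi-3 chat template
        max_tokens=128,
        temperature=0.0,
    )
    print(output["choices"][0]["text"])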
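The ONNX int4 runs map naturally onto Microsoft's onnxruntime-genai package, which publishes prebuilt int4 Phi-3 models for CPU and for GPU via DirectML (installed as onnxruntime-genai or onnxruntime-genai-directml, respectively). A minimal generation loop, with the model directory as a placeholder and the API as published in Microsoft's Phi-3 examples from the same period, could look like this:

    # Sketch of int4 ONNX inference with onnxruntime-genai (assumed setup;
    # install onnxruntime-genai for CPU or onnxruntime-genai-directml for
    # the DirectML GPU backend).
    import onnxruntime_genai as og

    # Placeholder path: an int4 model directory downloaded from the
    # microsoft/Phi-3-mini-128k-instruct-onnx repository on Hugging Face.
    model = og.Model("phi3-mini-128k-instruct-onnx-int4")
    tokenizer = og.Tokenizer(model)

    params = og.GeneratorParams(model)
    params.set_search_options(max_length=256)
    params.input_ids = tokenizer.encode(
        "<|user|>\nWhat is MLOps?<|end|>\n<|assistant|>"
    )

    generator = og.Generator(model, params)
    stream = tokenizer.create_stream()
    while not generator.is_done():
        generator.compute_logits()
        generator.generate_next_token()
        print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)

Switching between the CPU and DirectML runs is then a matter of installing the matching package and pointing og.Model at the corresponding int4 model directory; the generation loop itself is unchanged.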
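Finally, since the comparison hinges on inference time, a simple generic timing harness is enough to reproduce the measurements; this is illustrative only, and generate_fn stands in for whichever backend's generate call is being measured:

    # Generic timing harness for comparing the three backends (illustrative).
    import time

    def time_generation(generate_fn, n_runs=3):
        # generate_fn: any zero-argument callable that performs one full
        # generation with the backend under test (MLFlow/PyTorch, GGUF, ONNX).
        times = []
        for _ in range(n_runs):
            start = time.perf_counter()
            generate_fn()
            times.append(time.perf_counter() - start)
        return sum(times) / len(times)

    # Example with the llama-cpp-python model from the GGUF sketch above:
    # mean_s = time_generation(lambda: llm("What is MLOps?", max_tokens=128))
    # print(f"GGUF f16: {mean_s:.2f} s per generation")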

Syllabus

Intro
Phi3 mini 128k
Defining input and output parameters
Defining artifacts
Flowing the model
MLFlow notebook
MLFlow model
ONNX model
ONNX performance
DirectML
Microsoft ONNX
Conclusion


Taught by

The Machine Learning Engineer

Related Courses

Introduction to MLOps on Azure
A Cloud Guru
Introduction to AI/ML Toolkits with Kubeflow
Linux Foundation via edX
AWS Flash - Operationalize Generative AI Applications (FMOps/LLMOps)
Amazon Web Services via AWS Skill Builder
AWS Flash - Operationalize Generative AI Applications (FMOps/LLMOps) (Simplified Chinese)
Amazon Web Services via AWS Skill Builder
AWS ML Engineer Associate 3.3 Automate Deployment
Amazon Web Services via AWS Skill Builder