MLOps: Logging and Loading Microsoft Phi3 Mini 128k in GGUF with MLflow
Offered By: The Machine Learning Engineer via YouTube
Course Description
Overview
Learn how to log and load a quantized llama.cpp model in MLflow in this 18-minute tutorial video. Wrap the model in a custom Python class, log it to MLflow, and load it back for inference, using a Microsoft Phi-3 Mini 128k model quantized to int8 in GGUF format with llama.cpp. Follow along with the provided notebook to gain hands-on experience implementing MLOps practices in machine learning and data science projects.
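The workflow described above can be sketched with MLflow's pyfunc flavor and the llama-cpp-python bindings: a custom Python class wraps the GGUF model, gets logged together with the weights file, and is reloaded for inference. The snippet below is a minimal sketch, not the notebook's exact code; the GGUF file name, artifact key, context size, and generation parameters are assumptions.

```python
# Minimal sketch (assumptions noted): wrap a llama.cpp GGUF model in an MLflow
# pyfunc model, log it, then reload it for inference.
# Assumes `mlflow` and `llama-cpp-python` are installed and that an int8 (Q8_0)
# GGUF export of Phi-3 Mini 128k exists at the hypothetical path below.
import mlflow
import mlflow.pyfunc
from llama_cpp import Llama


class Phi3GGUFModel(mlflow.pyfunc.PythonModel):
    """Custom pyfunc wrapper around a quantized llama.cpp model."""

    def load_context(self, context):
        # MLflow resolves the logged GGUF file from the model's artifacts.
        self.llm = Llama(
            model_path=context.artifacts["gguf_file"],
            n_ctx=4096,        # context window for inference (assumption)
            verbose=False,
        )

    def predict(self, context, model_input):
        # Expects a dict or DataFrame with a "prompt" column; returns one
        # completion string per prompt.
        outputs = []
        for prompt in model_input["prompt"]:
            result = self.llm(prompt, max_tokens=256, temperature=0.1)
            outputs.append(result["choices"][0]["text"])
        return outputs


if __name__ == "__main__":
    gguf_path = "Phi-3-mini-128k-instruct-Q8_0.gguf"  # hypothetical local file

    # Log the wrapper class together with the GGUF weights as an artifact.
    with mlflow.start_run():
        model_info = mlflow.pyfunc.log_model(
            artifact_path="phi3_mini_128k_gguf",
            python_model=Phi3GGUFModel(),
            artifacts={"gguf_file": gguf_path},
            pip_requirements=["mlflow", "llama-cpp-python"],
        )

    # Load the logged model back and run inference through the pyfunc interface.
    loaded = mlflow.pyfunc.load_model(model_info.model_uri)
    print(loaded.predict({"prompt": ["What is MLOps?"]}))
```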
Syllabus
MLOps MLflow: Log and Load in MLflow Microsoft Phi3 Mini 128k in GGUF #machinelearning #datascience
Taught by
The Machine Learning Engineer
Related Courses
Machine Learning Operations (MLOps): Getting Started
Google Cloud via Coursera
Design and Implementation of Machine Learning Systems
Higher School of Economics via Coursera
Demystifying Machine Learning Operations (MLOps)
Pluralsight
Machine Learning Engineer with Microsoft Azure
Microsoft via Udacity
Machine Learning Engineering for Production (MLOps)
DeepLearning.AI via Coursera