MLOps: Logging and Loading Microsoft Phi3 Mini 128k in GGUF with MLflow
Offered By: The Machine Learning Engineer via YouTube
Course Description
Overview
Learn how to log and load a quantized llama.cpp model in MLflow in this 18-minute tutorial video. Create a Python class, log it in MLflow, and load it back for inference, using a Microsoft Phi3 mini 128k model quantized to int8 with llama.cpp in GGUF format. Follow along with the provided notebook to gain hands-on experience implementing MLOps practices for machine learning and data science projects.
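As an illustration of the workflow covered in the video, the sketch below shows one way to wrap a llama.cpp GGUF model in a custom MLflow pyfunc class, log it, and load it back for inference. It is not the notebook's exact code: the class name, the artifact key "gguf_file", the local file name phi3-mini-128k-q8_0.gguf, and the use of the llama-cpp-python binding are assumptions for the example.

```python
import mlflow
import mlflow.pyfunc
import pandas as pd
from llama_cpp import Llama  # llama-cpp-python binding (assumed)

class Phi3GGUF(mlflow.pyfunc.PythonModel):
    """Minimal pyfunc wrapper around a llama.cpp GGUF model (illustrative)."""

    def load_context(self, context):
        # The GGUF file is attached as an artifact when the model is logged,
        # so MLflow hands back its local path here at load time.
        self.llm = Llama(
            model_path=context.artifacts["gguf_file"],
            n_ctx=4096,  # smaller than the full 128k context to keep memory modest
        )

    def predict(self, context, model_input):
        # Expect a DataFrame with a "prompt" column; return one completion per row.
        outputs = []
        for prompt in model_input["prompt"]:
            result = self.llm(prompt, max_tokens=256)
            outputs.append(result["choices"][0]["text"])
        return outputs

# Log the wrapper together with the quantized GGUF file as an MLflow model.
with mlflow.start_run():
    model_info = mlflow.pyfunc.log_model(
        artifact_path="phi3-mini-128k-gguf",
        python_model=Phi3GGUF(),
        artifacts={"gguf_file": "phi3-mini-128k-q8_0.gguf"},  # hypothetical local path
        pip_requirements=["llama-cpp-python", "pandas"],
    )

# Load the logged model back and run inference through the generic pyfunc interface.
loaded = mlflow.pyfunc.load_model(model_info.model_uri)
print(loaded.predict(pd.DataFrame({"prompt": ["Explain MLOps in one sentence."]})))
```

Logging the GGUF file as an artifact (rather than baking a path into the class) lets MLflow copy the weights into the run and resolve their location automatically wherever the model is later loaded.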
Syllabus
MLOPS MLFlow: Log and Load in MLflow Microsoft Phi3 mini 128k in GGUF #machinelearning #datascience
Taught by
The Machine Learning Engineer
Related Courses
Discrete Inference and Learning in Artificial Vision (École Centrale Paris via Coursera)
Teaching Literacy Through Film (The British Film Institute via FutureLearn)
Linear Regression and Modeling (Duke University via Coursera)
Probability and Statistics (Stanford University via Stanford OpenEdx)
Statistical Reasoning (Stanford University via Stanford OpenEdx)