MLOps: Logging and Loading Microsoft Phi3 Mini 128k in GGUF with MLflow
Offered By: The Machine Learning Engineer via YouTube
Course Description
Overview
Learn how to log and load a quantized llama.cpp model in MLflow in this 18-minute tutorial video. Create a Python class, log it in MLflow, and load it for inference, using a Microsoft Phi3 Mini 128k model quantized to int8 with llama.cpp in GGUF format. Follow along with the provided notebook to gain hands-on experience implementing MLOps practices for machine learning and data science projects.
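The video follows its own notebook; a minimal sketch of the pattern it describes (not the author's exact code) might look like the following, assuming llama-cpp-python is installed and the GGUF file sits at a hypothetical local path such as models/phi-3-mini-128k-instruct-q8_0.gguf:

```python
import mlflow
import mlflow.pyfunc
import pandas as pd
from llama_cpp import Llama


class Phi3GGUFModel(mlflow.pyfunc.PythonModel):
    """Wraps a llama.cpp GGUF model so MLflow can log and reload it."""

    def load_context(self, context):
        # The GGUF file is stored as an MLflow artifact and resolved to a local path here.
        self.llm = Llama(
            model_path=context.artifacts["gguf_file"],
            n_ctx=4096,  # context window used at inference time (assumption)
        )

    def predict(self, context, model_input):
        # Expects a pandas DataFrame with a "prompt" column; returns one completion per row.
        outputs = []
        for prompt in model_input["prompt"]:
            result = self.llm(prompt, max_tokens=256)
            outputs.append(result["choices"][0]["text"])
        return outputs


# Log the wrapper together with the quantized GGUF file (hypothetical path).
with mlflow.start_run():
    model_info = mlflow.pyfunc.log_model(
        artifact_path="phi3-mini-128k-gguf",
        python_model=Phi3GGUFModel(),
        artifacts={"gguf_file": "models/phi-3-mini-128k-instruct-q8_0.gguf"},
        pip_requirements=["llama-cpp-python", "mlflow", "pandas"],
    )

# Load the logged model back from the tracking store and run inference.
loaded = mlflow.pyfunc.load_model(model_info.model_uri)
print(loaded.predict(pd.DataFrame({"prompt": ["What is MLOps?"]})))
```

Packaging the GGUF file as an MLflow artifact (rather than hard-coding a path inside the class) is what lets the model be loaded for inference on another machine straight from the tracking server.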
Syllabus
MLOps MLflow: Log and Load in MLflow Microsoft Phi3 Mini 128k in GGUF #machinelearning #datascience
Taught by
The Machine Learning Engineer
Related Courses
Digital Signal Processing (École Polytechnique Fédérale de Lausanne via Coursera)
Principles of Communication Systems - I (Indian Institute of Technology Kanpur via Swayam)
Digital Signal Processing 2: Filtering (École Polytechnique Fédérale de Lausanne via Coursera)
Digital Signal Processing 3: Analog vs Digital (École Polytechnique Fédérale de Lausanne via Coursera)
Digital Signal Processing 4: Applications (École Polytechnique Fédérale de Lausanne via Coursera)