MLOPS LLMs: Converting Microsoft Phi3 to GGUF Format with LLaMA.cpp
Offered By: The Machine Learning Engineer via YouTube
Course Description
Overview
Learn how to convert and quantize the Microsoft Phi3 model with LLaMA.cpp in this 23-minute tutorial video. Explore the process of exporting the model to GGUF in several precisions, including bf16, fp16, and q8_0. Follow along with practical demonstrations and access the accompanying notebooks on GitHub to deepen your understanding of model quantization techniques in machine learning and data science.
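To give a feel for what the q8_0 format mentioned above does, here is a minimal, illustrative Python sketch of block-wise 8-bit quantization in the style llama.cpp uses (weights split into blocks of 32 values, each block storing one scale plus 32 signed 8-bit integers). This is a simplified teaching model, not llama.cpp's actual C/C++ implementation:

```python
# Illustrative sketch of q8_0-style block quantization (assumption:
# simplified from llama.cpp, which implements this in C/C++ with
# fp16 scales and packed int8 storage).

BLOCK = 32  # q8_0 groups weights into blocks of 32

def quantize_q8_0(values):
    """Quantize a flat list of floats into (scale, int8-list) blocks."""
    blocks = []
    for i in range(0, len(values), BLOCK):
        chunk = values[i:i + BLOCK]
        amax = max(abs(v) for v in chunk) or 1.0
        scale = amax / 127.0  # map [-amax, amax] onto [-127, 127]
        q = [max(-127, min(127, round(v / scale))) for v in chunk]
        blocks.append((scale, q))
    return blocks

def dequantize_q8_0(blocks):
    """Reconstruct approximate floats from the quantized blocks."""
    return [scale * qv for scale, q in blocks for qv in q]
```

Because each block carries its own scale, the per-weight reconstruction error is bounded by half a quantization step for that block, which is why q8_0 loses very little accuracy relative to fp16 while halving the storage per weight.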
Syllabus
MLOPS LLMs: Convert Microsoft Phi3 to GGUF format with LLaMA.cpp #machinelearning #datascience
Taught by
The Machine Learning Engineer
Related Courses
Autogen and Local LLMs Create Realistic Stable Diffusion Model Autonomously
kasukanra via YouTube
Fine-Tuning a Local Mistral 7B Model - Step-by-Step Guide
All About AI via YouTube
No More Runtime Setup - Bundling, Distributing, Deploying, and Scaling LLMs Seamlessly with Ollama Operator
CNCF [Cloud Native Computing Foundation] via YouTube
Running LLMs in the Cloud - Approaches and Best Practices
CNCF [Cloud Native Computing Foundation] via YouTube