LLMOps: OpenVINO Toolkit INT4 Quantization of Llama 3.2 3B and Inference on CPU
Offered By: The Machine Learning Engineer via YouTube
Course Description
Overview
Learn how to convert the Llama 3.2 3B (3-billion-parameter) model to OpenVINO IR format and quantize its weights to 4-bit integer (INT4) precision. Follow along as model conversion and quantization are demonstrated step by step, then see how to run inference on a CPU using Chain of Thought (CoT) prompts with the optimized model. An accompanying Jupyter notebook is provided for hands-on practice with the LLMOps techniques covered in this 26-minute tutorial on data science and machine learning.
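To make the INT4 step concrete, here is a minimal NumPy sketch of symmetric per-group 4-bit weight quantization, the general scheme behind INT4 weight compression (in practice the conversion is done with the OpenVINO/Optimum tooling rather than by hand; the function names and the group size of 32 below are illustrative assumptions, not the toolkit's API):

```python
import numpy as np

def quantize_int4_symmetric(w, group_size=32):
    """Illustrative symmetric INT4 quantization with per-group scales.

    Each group of `group_size` weights shares one FP scale; the weights
    themselves are stored as 4-bit integers in [-8, 7].
    """
    groups = w.reshape(-1, group_size)
    # Map the largest magnitude in each group to the INT4 limit 7.
    scale = np.abs(groups).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0  # avoid division by zero for all-zero groups
    q = np.clip(np.round(groups / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate FP32 weights from INT4 codes and scales."""
    return (q.astype(np.float32) * scale).reshape(-1)

# Demo: quantize 1024 random weights and measure reconstruction error.
rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, scale = quantize_int4_symmetric(w)
w_hat = dequantize(q, scale)
print("max abs reconstruction error:", np.abs(w - w_hat).max())
```

The payoff is storage: each weight needs 4 bits plus a small per-group scale instead of 32 bits, at the cost of the rounding error printed above, which is why INT4 models fit and run comfortably on CPU.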
Syllabus
LLMOps: OpenVino Toolkit quantization 4int LLama3.2 3B, Inference CPU #datascience #machinelearning
Taught by
The Machine Learning Engineer
Related Courses
Digital Signal Processing (École Polytechnique Fédérale de Lausanne via Coursera)
Principles of Communication Systems - I (Indian Institute of Technology Kanpur via Swayam)
Digital Signal Processing 2: Filtering (École Polytechnique Fédérale de Lausanne via Coursera)
Digital Signal Processing 3: Analog vs Digital (École Polytechnique Fédérale de Lausanne via Coursera)
Digital Signal Processing 4: Applications (École Polytechnique Fédérale de Lausanne via Coursera)