Fine-Tuning and Deploying LLaMA 3.1 on Hugging Face and Ollama
Offered By: Mervin Praison via YouTube
Course Description
Overview
Learn how to fine-tune the LLaMA 3.1 AI model on custom data in this comprehensive 15-minute video tutorial. Follow a step-by-step guide to train the 8 billion parameter model, save it to Hugging Face, and deploy it on Ollama. Discover why fine-tuning matters for custom data, compare the model's performance before and after training, and master the process of configuring, loading data, and training with the SFT Trainer. Gain insights into creating the GGUF format, writing Ollama Modelfiles, and testing the model within the Ollama environment. Ideal for businesses aiming to leverage AI with private data, this tutorial offers practical knowledge on creating a custom AI model tailored to specific needs, with enhanced performance and reduced memory usage.
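Before the SFT Trainer can consume a custom dataset, each record is typically rendered into a single prompt string. The sketch below shows one common way to do that; the Alpaca-style template and the field names `instruction`, `input`, and `output` are illustrative assumptions, not necessarily what the video uses.

```python
# Sketch: formatting custom instruction records into prompt strings for
# supervised fine-tuning (SFT). The Alpaca-style template is an assumption
# for illustration; adapt it to your dataset's actual schema.
PROMPT_TEMPLATE = """### Instruction:
{instruction}

### Input:
{input}

### Response:
{output}"""

def format_example(record: dict) -> str:
    """Render one dataset record into the text the SFT trainer consumes."""
    return PROMPT_TEMPLATE.format(
        instruction=record.get("instruction", ""),
        input=record.get("input", ""),
        output=record.get("output", ""),
    )

records = [
    {"instruction": "Summarize the return policy.",
     "input": "Returns accepted within 30 days of purchase.",
     "output": "Items can be returned within 30 days."},
]
texts = [format_example(r) for r in records]
```

A list of strings like `texts` can then be wrapped in a dataset object and handed to the trainer's formatting hook.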
Syllabus
- Introduction to LLaMA 3.1 fine-tuning
- Overview of the video content
- Configuration
- Loading the dataset
- Training the model
- Saving the model
- Running the code and observing results
- Saving the model to Ollama
- Creating GGUF format
- Creating Ollama Modelfile
- Creating the model in Ollama
- Testing the model with Ollama
- Pushing the model to Ollama
- Final steps and conclusion
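The syllabus steps for Ollama boil down to exporting a GGUF file, describing it in a Modelfile, and registering it with `ollama create`. A minimal sketch of that Modelfile step, assuming a hypothetical export named `model.gguf`:

```python
from pathlib import Path

# Sketch of an Ollama Modelfile pointing at a locally exported GGUF file.
# "model.gguf" and the temperature value are illustrative assumptions;
# match them to whatever your export step actually produced.
modelfile = """FROM ./model.gguf
PARAMETER temperature 0.7
"""

Path("Modelfile").write_text(modelfile)

# Then, from a shell with Ollama installed (not run here):
#   ollama create my-llama3.1 -f Modelfile
#   ollama run my-llama3.1
```

The `FROM` line may point at a local GGUF path or an existing model name; keeping it relative to the Modelfile makes the directory portable.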
Taught by
Mervin Praison
Related Courses
- How to Quantize a Large Language Model with GGUF or AWQ - Trelis Research via YouTube
- MLOps: Comparing Microsoft Phi3 Mini 128k in GGUF, MLFlow, and ONNX Formats - The Machine Learning Engineer via YouTube
- MLOps with MLFlow: Comparing Microsoft Phi3 Mini 128k in GGUF, MLFlow, and ONNX Formats - The Machine Learning Engineer via YouTube
- MLOps: Logging and Loading Microsoft Phi3 Mini 128k in GGUF with MLflow - The Machine Learning Engineer via YouTube
- MLOps: Saving and Loading Microsoft Phi3 Mini 128k in GGUF Format with MLflow - The Machine Learning Engineer via YouTube