Fine-Tuning and Deploying LLaMA 3.1 on Hugging Face and Ollama
Offered By: Mervin Praison via YouTube
Course Description
Overview
Learn how to fine-tune the LLaMA 3.1 AI model on custom data in this comprehensive 15-minute video tutorial. Follow a step-by-step guide to train the 8-billion-parameter model, save it to Hugging Face, and deploy it on Ollama. Discover why fine-tuning matters for custom data, evaluate model performance before and after fine-tuning, and master the process of configuring the model, loading data, and training with the SFT Trainer. Gain insights into converting the model to GGUF format, writing an Ollama Modelfile, and testing the model within the Ollama environment (a rough sketch of these deployment steps follows the syllabus below). Perfect for businesses aiming to leverage AI with private data, this tutorial offers practical knowledge on building a custom AI model tailored to specific needs, with improved task performance and reduced memory usage.
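
The fine-tuning workflow described above (configure, load a dataset, train with the SFT Trainer, save and push to Hugging Face) can be sketched roughly as follows. This is an illustrative outline, not the exact code from the video: the dataset, repository names, and hyperparameters are placeholders, and the SFTTrainer arguments shown follow the style of recent TRL releases, which vary between versions.

```python
# Minimal supervised fine-tuning sketch (illustrative; not the video's exact code).
# Assumptions: access to the gated meta-llama/Meta-Llama-3.1-8B weights, a GPU with
# enough memory, and `pip install transformers datasets trl peft` already done.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base_model = "meta-llama/Meta-Llama-3.1-8B"                          # gated; request access on Hugging Face
dataset = load_dataset("mlabonne/guanaco-llama2-1k", split="train")  # placeholder dataset with a "text" column

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")

# LoRA keeps memory usage low by training a small set of adapter weights
# instead of the full 8B parameters.
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",      # column holding the formatted training examples
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="llama31-custom",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
    ),
)
trainer.train()

# Save locally, then push to the Hugging Face Hub (repo name is a placeholder).
trainer.save_model("llama31-custom")
trainer.model.push_to_hub("your-username/llama31-custom")
tokenizer.push_to_hub("your-username/llama31-custom")
```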
Syllabus
- Introduction to LLaMA 3.1 fine-tuning
- Overview of the video content
- Configuration
- Loading the dataset
- Training the model
- Saving the model
- Running the code and observing results
- Saving the model to Ollama
- Creating GGUF format
- Creating Ollama Modelfile
- Creating the model in Ollama
- Testing the model with Ollama
- Pushing the model to Ollama
- Final steps and conclusion
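
The deployment steps listed above (creating the GGUF file, writing the Modelfile, then creating, testing, and pushing the model in Ollama) typically look something like the following. File names, model names, and the Ollama namespace are placeholders, the GGUF conversion script name varies between llama.cpp versions, and the video may perform these steps differently.

```
# Modelfile (assumes the converted weights are in ./llama31-custom.gguf)
FROM ./llama31-custom.gguf
PARAMETER temperature 0.7
SYSTEM """You are a helpful assistant fine-tuned on custom data."""
```

```
# Convert the fine-tuned checkpoint to GGUF with llama.cpp (script name varies by version)
python llama.cpp/convert_hf_to_gguf.py ./llama31-custom --outfile llama31-custom.gguf

# Register, test, and publish the model with Ollama
ollama create llama31-custom -f Modelfile
ollama run llama31-custom "What does this model know about my custom data?"
ollama push your-namespace/llama31-custom   # requires an ollama.com account and a linked key
```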
Taught by
Mervin Praison
Related Courses
- The GenAI Stack - From Zero to Database-Backed Support Bot (Docker via YouTube)
- Ollama Crash Course: Running AI Models Locally Offline on CPU (1littlecoder via YouTube)
- AI Anytime, Anywhere - Getting Started with LLMs on Your Laptop (Docker via YouTube)
- Rust Ollama Tutorial - Interfacing with Ollama API Using ollama-rs (Jeremy Chone via YouTube)
- Ollama: Libraries, Vision Models, and OpenAI Compatibility Updates (Sam Witteveen via YouTube)