LoRA Fine-tuning Explained - Choosing Parameters and Optimizations
Offered By: Trelis Research via YouTube
Course Description
Overview
Dive into a comprehensive video tutorial on LoRA fine-tuning for machine learning models. Explore recent developments in the Mistral v0.3 and Phi-3 models before delving into full fine-tuning techniques. Learn the intricacies of LoRA, including how to select optimal alpha and rank parameters. Discover strategies for choosing fine-tuning hyperparameters such as learning rate, schedule, and batch size. Gain insights into advanced optimizations like rank-stabilized LoRA (rsLoRA), LoftQ, and LoRA+. Follow along with a practical demonstration using SFTTrainer from TRL to run training sessions. Access additional resources, support, and a companion notebook to enhance your understanding of LoRA fine-tuning techniques.
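The alpha and rank parameters mentioned above can be sketched in plain Python: LoRA adds a low-rank product B·A to a frozen weight, scaled by alpha/r, while rank-stabilized LoRA (rsLoRA) instead scales by alpha/sqrt(r) so the update magnitude stays stable as rank grows. The shapes and values below are illustrative only, not taken from the video.

```python
import math

def matmul(A, B):
    """Naive matrix multiply for small illustrative matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def lora_delta(B, A, alpha, r, rank_stabilized=False):
    """Scaled low-rank update added to the frozen weight: scaling * (B @ A).

    Standard LoRA uses scaling = alpha / r; rank-stabilized LoRA (rsLoRA)
    uses alpha / sqrt(r), which keeps the update's magnitude comparable
    as the rank r increases.
    """
    scaling = alpha / math.sqrt(r) if rank_stabilized else alpha / r
    BA = matmul(B, A)
    return [[scaling * x for x in row] for row in BA]

# Illustrative shapes: for a 2x2 weight with rank r = 1, B is 2x1 and A is 1x2.
B = [[1.0], [2.0]]
A = [[3.0, 4.0]]
delta = lora_delta(B, A, alpha=2, r=1)  # scaling = 2 / 1 = 2
print(delta)  # [[6.0, 8.0], [12.0, 16.0]]
```

Note how, at rank 1 with alpha 2, both scaling rules coincide (2/1 = 2/sqrt(1)); the two diverge only at higher ranks, which is exactly when rsLoRA's sqrt scaling matters.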
Syllabus
Welcome
Mistral v0.3
Phi-3 models
Full fine-tuning
LoRA
Picking LoRA alpha and rank
Running training with SFTTrainer from TRL
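A minimal configuration sketch of the kind of SFTTrainer run the syllabus covers, assuming the trl, peft, and datasets libraries are installed; the model name, dataset, and every hyperparameter value here are illustrative placeholders, not the video's actual settings.

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Illustrative LoRA settings: rank (r) and lora_alpha are the two knobs
# discussed in the video; a common starting heuristic is alpha = 2 * rank.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Illustrative fine-tuning hyperparameters: learning rate, schedule, batch size.
training_args = SFTConfig(
    output_dir="./lora-out",
    learning_rate=2e-4,
    lr_scheduler_type="constant",
    per_device_train_batch_size=4,
    num_train_epochs=1,
)

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-v0.3",  # placeholder model name
    train_dataset=load_dataset("timdettmers/openassistant-guanaco", split="train"),
    args=training_args,
    peft_config=peft_config,
)
trainer.train()
```

Passing a `peft_config` makes SFTTrainer wrap the base model with LoRA adapters, so only the low-rank matrices are trained while the base weights stay frozen.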
Taught by
Trelis Research
Related Courses
Fine-tuning Phi-3 for LeetCode: Dataset Generation and Unsloth Implementation (All About AI via YouTube)
LLM News: GPT-4, Project Astra, Veo, Copilot+ PCs, Gemini 1.5 Flash, and Chameleon (Elvis Saravia via YouTube)
LLM Tool Use - GPT4o-mini, Groq, and Llama.cpp (Trelis Research via YouTube)
Comparing LLAMA 3, Phi 3, and GPT-3.5 Turbo AI Agents for Web Search Performance (Data Centric via YouTube)
Building an Orca Mini and Phi 3 Chatbot with Python on Raspberry Pi 5 (Arm Software Developers via YouTube)