Fine-tuning Optimizations - DoRA, NEFT, LoRA+, and Unsloth
Offered By: Trelis Research via YouTube
Course Description
Overview
Explore advanced fine-tuning optimization techniques for large language models in this comprehensive video tutorial. Delve into LoRA (Low-Rank Adaptation) and its refinements, including DoRA (Weight-Decomposed Low-Rank Adaptation), NEFT (Noisy Embedding Fine-Tuning), LoRA+, and Unsloth. Learn how each method works, what advantages it offers, and how to implement it in practice through detailed explanations and notebook walk-throughs. Compare the effectiveness of the techniques and gain insight into choosing the best approach for your fine-tuning needs. Use the provided resources, including GitHub repositories, slides, and research papers, to deepen your understanding and application of these optimization strategies.
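As background for the techniques the course covers, the core LoRA idea can be sketched in a few lines of plain Python: the pretrained weight W stays frozen while a low-rank product B·A (scaled by alpha/r) is added to it and trained. This is a minimal illustration, not code from the video; the matrix sizes, rank, and alpha value are illustrative assumptions.

```python
# Frozen base weight W (2x2) and LoRA factors B (2x1), A (1x2), rank r = 1.
W = [[1.0, 2.0],
     [3.0, 4.0]]
B = [[0.0], [0.0]]      # B initialized to zero => the adapter starts as a no-op
A = [[0.5, -0.5]]
alpha, r = 2.0, 1

def matvec(M, v):
    # Plain matrix-vector product over nested lists.
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def adapted_forward(x):
    # y = W x + (alpha / r) * B (A x); only A and B would be trained.
    base = matvec(W, x)
    delta = matvec(B, matvec(A, x))
    return [b + (alpha / r) * d for b, d in zip(base, delta)]

x = [1.0, 1.0]
print(adapted_forward(x))  # with B = 0 this equals W x -> [3.0, 7.0]
```

Because B starts at zero, the adapted model exactly matches the base model at initialization, which is why LoRA fine-tuning can begin from the pretrained behavior.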
Syllabus
Improving on LoRA
Video Overview
How does LoRA work?
Understanding DoRA
NEFT - Adding Noise to Embeddings
LoRA Plus
Unsloth for fine-tuning speedups
Comparing LoRA+, Unsloth, DoRA, NEFT
Notebook Setup and LoRA
DoRA Notebook Walk-through
NEFT Notebook Example
LoRA Plus
Unsloth
Final Recommendation
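One of the syllabus topics, NEFT (adding noise to embeddings during fine-tuning), can be sketched as follows. The alpha / sqrt(L * d) noise scaling follows the NEFTune paper; the token count, embedding dimension, and alpha value here are illustrative assumptions, and the noise is applied only during training, not at inference.

```python
import math
import random

def neft_noise(embeddings, alpha=5.0, seed=0):
    # embeddings: list of L token vectors, each of dimension d.
    # NEFT adds uniform noise in [-1, 1], scaled by alpha / sqrt(L * d),
    # to the embedding matrix before each training forward pass.
    rng = random.Random(seed)
    L, d = len(embeddings), len(embeddings[0])
    scale = alpha / math.sqrt(L * d)
    return [[e + scale * rng.uniform(-1.0, 1.0) for e in row]
            for row in embeddings]

emb = [[0.0] * 4 for _ in range(3)]      # 3 tokens, embedding dimension 4
noisy = neft_noise(emb)
max_abs = max(abs(v) for row in noisy for v in row)
# Every perturbation is bounded by alpha / sqrt(L * d):
assert max_abs <= 5.0 / math.sqrt(3 * 4)
```

The per-element bound shrinks as sequence length and embedding dimension grow, so the regularization stays mild for long inputs.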
Taught by
Trelis Research
Related Courses
How to Do Stable Diffusion LORA Training by Using Web UI on Different Models - Software Engineering Courses - SE Courses via YouTube
MicroPython & WiFi - Kevin McAleer via YouTube
Building a Wireless Community Sensor Network with LoRa - Hackaday via YouTube
ComfyUI - Node Based Stable Diffusion UI - Olivio Sarikas via YouTube
AI Masterclass for Everyone - Stable Diffusion, ControlNet, Depth Map, LORA, and VR - Hugh Hou via YouTube