Very Few Parameter Fine-Tuning with ReFT and LoRA
Offered By: Trelis Research via YouTube
Course Description
Overview
Explore advanced techniques for fine-tuning large language models with minimal parameters in this 55-minute video from Trelis Research. Delve into the intricacies of ReFT (Representation Fine-Tuning) and LoRA (Low-Rank Adaptation) methodologies, starting with a comprehensive review of transformer architecture. Learn the practical aspects of weight fine-tuning using LoRA, followed by an in-depth look at Representation Fine-tuning. Compare these two approaches and understand their respective strengths. Get hands-on experience with step-by-step walkthroughs for both LoRA and ReFT fine-tuning processes, including GPU setup considerations. Discover techniques for combining ReFT fine-tunes and explore the concept of orthogonality in fine-tuning. Gain insights into the limitations of LoReFT and LoRA fine-tuning, and conclude with valuable tips to enhance your fine-tuning skills. Access additional resources, including complete scripts, one-click fine-tuning templates, and community support to further your learning journey.
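To make the LoRA idea concrete before watching: LoRA freezes the pretrained weight matrix and learns only a low-rank update, so the effective weight becomes W + (alpha/r)·BA. The sketch below is a minimal illustration of that principle, not code from the video; the class name and initialization choices are illustrative.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA layer: frozen base weight plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # A is small-random, B is zero, so training starts from the base model exactly
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # base output plus the scaled low-rank correction x A^T B^T
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(4096, 4096), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 65536 trainable parameters vs ~16.8M in the frozen base layer
```

The parameter count shows why this qualifies as "very few parameter" fine-tuning: two rank-8 matrices replace gradients over the full 4096×4096 weight.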
Syllabus
ReFT and LoRA Fine-tuning with few parameters
Video Overview
Transformer Architecture Review
Weight fine-tuning with LoRA
Representation Fine-tuning (ReFT)
Comparing LoRA with ReFT
Fine-tuning GPU setup
LoRA Fine-tuning walk-through
ReFT fine-tuning walk-through
Combining ReFT fine-tunes
Orthogonality and combining fine-tunes
Limitations of LoReFT and LoRA fine-tuning
Fine-tuning tips
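Where LoRA edits weights, ReFT intervenes on hidden representations at chosen layers and token positions. The sketch below shows the core LoReFT edit h' = h + Rᵀ(Wh + b − Rh) in its simplest form; it is an assumption-laden illustration (names and shapes are mine, and the published method additionally constrains R to have orthonormal rows, which is what makes combining orthogonal fine-tunes possible), not the course's code.

```python
import torch
import torch.nn as nn

class LoReFTIntervention(nn.Module):
    """Sketch of a LoReFT-style intervention: edit hidden states in a low-rank subspace.
    h' = h + R^T (W h + b - R h); R projects into the subspace, W h + b is the learned target."""
    def __init__(self, hidden_size: int, rank: int = 4):
        super().__init__()
        self.R = nn.Parameter(torch.randn(rank, hidden_size) * 0.01)  # low-rank projection
        self.W = nn.Linear(hidden_size, rank)  # learned target values in the subspace

    def forward(self, h):
        # move the component of h inside the subspace toward the learned target
        return h + (self.W(h) - h @ self.R.T) @ self.R

h = torch.randn(2, 16, 768)            # (batch, seq, hidden)
iv = LoReFTIntervention(768, rank=4)
print(iv(h).shape)                     # unchanged: (2, 16, 768)
```

Because the edit touches only a rank-4 subspace of the hidden state, the trainable footprint stays tiny, and interventions trained on disjoint (orthogonal) subspaces can in principle be applied together, which is the intuition behind the "combining ReFT fine-tunes" segments above.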
Taught by
Trelis Research
Related Courses
Biomolecular Modeling on GPU (Moscow Institute of Physics and Technology via Coursera)
Practical Deep Learning For Coders (fast.ai via Independent)
GPU Architectures and Programming (Indian Institute of Technology, Kharagpur via Swayam)
Perform Real-Time Object Detection with YOLOv3 (Coursera Project Network via Coursera)
Getting Started with PyTorch (Coursera Project Network via Coursera)