Full Fine-tuning LLMs with Lower VRAM: Optimizers, GaLore, and Advanced Techniques
Offered By: Trelis Research via YouTube
Course Description
Overview
Syllabus
LLM Full fine-tuning with lower VRAM
Video Overview
Understanding Optimizers
Stochastic Gradient Descent (SGD)
AdamW Optimizer and VRAM requirements
AdamW 8-bit optimizer
Adafactor optimizer and memory requirements
GaLore - reducing gradient and optimizer VRAM
LoRA versus GaLore
Better and Faster GaLore via Subspace Descent
Layerwise gradient updates
Training Scripts
How gradient checkpointing works to reduce memory
AdamW Performance
AdamW 8-bit Performance
Adafactor with manual learning rate and schedule
Adafactor with default/auto learning rate
GaLore AdamW
GaLore AdamW with Subspace Descent
Using AdamW 8-bit and Adafactor with GaLore
Notebook demo of layerwise gradient updates
Running with LoRA
Running Inference and Pushing Models to the Hub
Single GPU Recommendations
Multi-GPU Recommendations
Resources
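The syllabus above names several memory-saving techniques; the short sketches below illustrate the main ones. As context for the "AdamW Optimizer and VRAM requirements" and "AdamW 8-bit optimizer" chapters, here is a rough back-of-the-envelope calculation of the persistent training state in full fine-tuning. The byte counts assume bf16 weights and gradients with fp32 AdamW moments, and ignore activations, any fp32 master copy of the weights, and runtime overhead, so treat the numbers as illustrative rather than exact.

```python
# Back-of-the-envelope VRAM for the persistent training state of a dense LLM.
# Ignores activations, CUDA context and fragmentation, so real usage is higher.
def training_state_gb(n_params: float,
                      weight_bytes: int = 2,   # bf16 weights
                      grad_bytes: int = 2,     # bf16 gradients
                      optim_bytes: int = 8     # AdamW: two fp32 moments per parameter
                      ) -> float:
    return n_params * (weight_bytes + grad_bytes + optim_bytes) / 1024**3

seven_b = 7e9
print(f"AdamW (fp32 states): {training_state_gb(seven_b):.0f} GB")                 # ~78 GB
print(f"AdamW 8-bit states:  {training_state_gb(seven_b, optim_bytes=2):.0f} GB")  # ~39 GB
# Adafactor factors the second moment, and GaLore projects gradients and optimizer
# state into a low-rank subspace, shrinking the optimizer term further still.
```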
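For the "AdamW 8-bit optimizer" and Adafactor chapters, the sketch below shows how these lower-memory drop-in replacements for AdamW are typically instantiated with bitsandbytes and transformers. The tiny nn.Linear model is a stand-in for a loaded LLM, and the learning rates are placeholders rather than the values used in the video.

```python
# Lower-memory drop-in optimizers (stand-in model; hyperparameters are placeholders).
import torch.nn as nn
import bitsandbytes as bnb
from transformers.optimization import Adafactor, AdafactorSchedule

model = nn.Linear(4096, 4096)  # stand-in for a loaded PreTrainedModel

# 8-bit AdamW: the two moments are stored in 8-bit blocks instead of fp32.
adamw_8bit = bnb.optim.AdamW8bit(model.parameters(), lr=1e-5)

# Adafactor, default/auto mode: relative step sizes, no external LR schedule needed.
adafactor_auto = Adafactor(model.parameters(), scale_parameter=True,
                           relative_step=True, warmup_init=True, lr=None)
auto_schedule = AdafactorSchedule(adafactor_auto)

# Adafactor, manual mode: fixed learning rate plus whatever scheduler you choose.
adafactor_manual = Adafactor(model.parameters(), scale_parameter=False,
                             relative_step=False, warmup_init=False, lr=1e-5)
```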
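For the GaLore chapters ("GaLore AdamW", subspace descent, layerwise gradient updates), the technique can be driven from the galore-torch optimizers directly or through the Hugging Face Trainer. The following is a minimal sketch assuming transformers >= 4.39 with galore-torch installed; the model id, dataset, and GaLore hyperparameters (rank, update_proj_gap, scale) are placeholders, not the exact settings from the Trelis training scripts.

```python
# Minimal GaLore AdamW run through the Hugging Face Trainer
# (assumes transformers >= 4.39 and `pip install galore-torch`).
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"   # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

dataset = load_dataset("imdb", split="train[:1%]")  # placeholder dataset
dataset = dataset.map(lambda b: tokenizer(b["text"], truncation=True, max_length=512),
                      batched=True, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="galore-run",
    per_device_train_batch_size=1,
    gradient_checkpointing=True,            # see the checkpointing sketch below
    optim="galore_adamw",                   # "galore_adamw_layerwise" updates layer by layer
    optim_target_modules=["attn", "mlp"],   # apply GaLore to attention and MLP projections
    optim_args="rank=128, update_proj_gap=200, scale=2.0",
    max_steps=100,
    logging_steps=10,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("galore-run/final")
```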
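The "How gradient checkpointing works" chapter covers the standard activation-recompute trick: most activations are dropped in the forward pass and recomputed during backward, trading extra compute for memory. It can be switched on via TrainingArguments (as in the sketch above) or directly on the model; the model id below is a placeholder.

```python
# Enabling gradient checkpointing directly on a transformers model.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model.gradient_checkpointing_enable(
    gradient_checkpointing_kwargs={"use_reentrant": False}  # non-reentrant variant
)
model.config.use_cache = False  # the KV cache conflicts with checkpointing during training
```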
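Finally, for the "Running Inference and Pushing Models to the Hub" chapter, a minimal sketch: run a quick generation check on the saved weights, then push them to the Hugging Face Hub. The local path and repo id are placeholders, and pushing requires an authenticated Hub session (huggingface-cli login or an HF_TOKEN).

```python
# Quick generation check on the fine-tuned weights, then push to the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

save_dir = "galore-run/final"  # where the training sketch saved the model
model = AutoModelForCausalLM.from_pretrained(save_dir)
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")  # base tokenizer

prompt = "GaLore reduces optimizer memory by"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))

repo_id = "your-username/tinyllama-galore-ft"  # placeholder repo id
model.push_to_hub(repo_id)
tokenizer.push_to_hub(repo_id)
```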
Taught by
Trelis Research
Related Courses
TensorFlow: Working with NLP (LinkedIn Learning)
Introduction to Video Editing - Video Editing Tutorials (Great Learning via YouTube)
HuggingFace Crash Course - Sentiment Analysis, Model Hub, Fine Tuning (Python Engineer via YouTube)
GPT3 and Finetuning the Core Objective Functions - A Deep Dive (David Shapiro ~ AI via YouTube)
How to Build a Q&A AI in Python - Open-Domain Question-Answering (James Briggs via YouTube)