LoRA Explained - Efficient Fine-Tuning of Large Language Models
Offered By: Unify via YouTube
Course Description
Overview
Explore the LoRA (Low-Rank Adaptation) technique in this 30-minute video from Unify. Discover how LoRA enables efficient fine-tuning of large language models by freezing the pre-trained weights and injecting trainable low-rank matrix decompositions into the transformer layers. Learn how this drastically reduces the number of trainable parameters required for task-specific adaptation, and what that means for the memory and compute cost of fine-tuning. Gain insights from the original research paper and access additional resources, including AI research newsletters, blogs on AI deployment, and various platforms to connect with the Unify community.
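The idea described above can be sketched in a few lines: the frozen weight W is augmented with a product of two small trainable matrices B and A of rank r, so only B and A are updated during fine-tuning. This is a minimal NumPy sketch, not the reference implementation from the paper; the dimensions, rank, and scaling factor `alpha` are illustrative choices.

```python
import numpy as np

d, k, r = 512, 512, 8  # layer in/out dims and LoRA rank (illustrative values)

rng = np.random.default_rng(0)
W = rng.normal(size=(d, k))          # frozen pre-trained weight (not updated)
A = rng.normal(size=(r, k)) * 0.01   # trainable low-rank factor, random init
B = np.zeros((d, r))                 # trainable, zero init: B @ A = 0 at start,
                                     # so the adapted layer initially equals W

def forward(x, alpha=16.0):
    # y = W x + (alpha / r) * B A x  -- gradients flow only through A and B
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = W.size            # 512 * 512 = 262,144 params without LoRA
lora_params = A.size + B.size   # 8*512 + 512*8 = 8,192 trainable params
print(full_params, lora_params, full_params // lora_params)  # 32x fewer
```

Because B is zero-initialized, the model's output is unchanged before any fine-tuning begins, and at inference time B @ A can be merged back into W so LoRA adds no extra latency.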
Syllabus
LoRA Explained
Taught by
Unify
Related Courses
Neural Networks for Machine Learning — University of Toronto via Coursera
Good Brain, Bad Brain: Basics — University of Birmingham via FutureLearn
Statistical Learning with R — Stanford University via edX
Machine Learning 1—Supervised Learning — Brown University via Udacity
Fundamentals of Neuroscience, Part 2: Neurons and Networks — Harvard University via edX