LoRA Explained - Efficient Fine-Tuning of Large Language Models
Offered By: Unify via YouTube
Course Description
Overview
Explore the innovative LoRA (Low-Rank Adaptation) technique in this 30-minute video from Unify. Discover how LoRA enables efficient fine-tuning of large language models by freezing the pre-trained weights and injecting trainable low-rank matrix decompositions into the transformer layers. Learn how this drastically reduces the number of trainable parameters needed for task-specific adaptation and what that means for fine-tuning large models at lower cost. Gain insights from the original research paper and access additional resources, including AI research newsletters, blogs on AI deployment, and various platforms to connect with the Unify community.
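To make the core idea concrete, here is a minimal sketch of a LoRA-style layer, assuming PyTorch. The LoRALinear class, its hyperparameters, and the dimensions are illustrative assumptions, not code from the video: the frozen base projection keeps its pre-trained weights, and only the two small low-rank factors A and B are trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative sketch: frozen linear layer plus a trainable low-rank update,
    computing W x + (alpha / r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pre-trained weights; only the low-rank factors below are trained.
        for p in self.base.parameters():
            p.requires_grad = False
        # A is initialized with small random values, B with zeros,
        # so the adapted layer starts out identical to the frozen one.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank adaptation.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Hypothetical usage: wrap one 4096x4096 projection inside a transformer block.
base = nn.Linear(4096, 4096)
adapted = LoRALinear(base, r=8)
trainable = sum(p.numel() for p in adapted.parameters() if p.requires_grad)
print(trainable)  # 65,536 trainable parameters vs. ~16.8M in the frozen layer
```

In this sketch the trainable parameter count is 2 * r * 4096 = 65,536, a small fraction of the frozen layer's roughly 16.8 million weights, which illustrates the parameter reduction discussed in the video.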
Syllabus
LoRA Explained
Taught by
Unify
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Natural Language Processing - Columbia University via Coursera
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent