YoVDO

Fine-Tuning LLMs with PEFT and LoRA

Offered By: Sam Witteveen via YouTube

Tags

LLM (Large Language Model) Courses, LoRA (Low-Rank Adaptation) Courses, Hugging Face Courses, PEFT Courses

Course Description

Overview

Explore the process of fine-tuning Large Language Models (LLMs) using Parameter-Efficient Fine-Tuning (PEFT) and Low-Rank Adaptation (LoRA) in this informative video. Learn about the challenges of traditional full fine-tuning and how PEFT addresses them. Delve into the LoRA technique, examining its diagram and understanding its implementation. Get acquainted with the Hugging Face PEFT library and follow a detailed code walkthrough. Gain practical insight into efficiently fine-tuning decoder-style GPT models and uploading the results to the Hugging Face Hub. Additional resources, including a LoRA Colab notebook and related blog posts, are provided to further your understanding of these fine-tuning techniques.
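The core LoRA idea the video covers can be sketched independently of the video's own notebook: rather than updating a full pretrained weight matrix W, LoRA freezes W and learns a low-rank update B·A scaled by alpha/r. The snippet below is a minimal NumPy illustration of that math (dimensions, initialization, and variable names are illustrative assumptions, not the Hugging Face PEFT API):

```python
import numpy as np

# LoRA sketch: instead of updating the full weight W (d_out x d_in),
# learn a low-rank update B @ A, with A (r x d_in) and B (d_out x r),
# where r << min(d_out, d_in). W stays frozen during fine-tuning.
d_in, d_out, r = 768, 768, 8
alpha = 16  # LoRA scaling hyperparameter

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                # trainable, zero init

def lora_forward(x):
    # Original path plus scaled low-rank path: (W + (alpha/r) * B @ A) @ x
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# Because B starts at zero, the adapted model matches the base model at init.
assert np.allclose(lora_forward(x), W @ x)

full_params = d_out * d_in              # params touched by full fine-tuning
lora_params = r * d_in + d_out * r      # trainable params with LoRA
print(f"full fine-tune params: {full_params:,}")
print(f"LoRA params: {lora_params:,} ({100 * lora_params / full_params:.2f}% of full)")
```

For this single 768×768 layer, LoRA with r=8 trains only about 2% of the weights full fine-tuning would update, which is why the approach scales to large decoder models.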

Syllabus

- Intro
- Problems with fine-tuning
- Introducing PEFT
- Other PEFT techniques
- LoRA Diagram
- Hugging Face PEFT Library
- Code Walkthrough


Taught by

Sam Witteveen

Related Courses

Google BARD and ChatGPT AI for Increased Productivity
Udemy
Bringing LLM to the Enterprise - Training From Scratch or Just Fine-Tune With Cerebras-GPT
Prodramp via YouTube
Generative AI and Long-Term Memory for LLMs
James Briggs via YouTube
Extractive Q&A With Haystack and FastAPI in Python
James Briggs via YouTube
OpenAssistant First Models Are Here! - Open-Source ChatGPT
Yannic Kilcher via YouTube