Fine-Tuning LLMs with PEFT and LoRA

Offered By: Sam Witteveen via YouTube

Tags

- LLM (Large Language Model) Courses
- LoRA (Low-Rank Adaptation) Courses
- Hugging Face Courses
- PEFT Courses

Course Description

Overview

Explore the process of fine-tuning Large Language Models (LLMs) using Parameter-Efficient Fine-Tuning (PEFT) and Low-Rank Adaptation (LoRA) in this informative video. Learn about the challenges of traditional fine-tuning methods and discover how PEFT addresses these issues. Delve into the LoRA technique, examining its diagram and understanding its implementation. Get acquainted with the Hugging Face PEFT Library and follow along with a detailed code walkthrough. Gain practical insights into how to fine-tune decoder-style GPT models efficiently and upload the results to the Hugging Face Hub. Access additional resources, including a LoRA Colab notebook and relevant blog posts, to further enhance your understanding of these advanced fine-tuning techniques.
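
The code walkthrough itself lives in the video and the linked Colab notebook; the sketch below only illustrates the general pattern of LoRA fine-tuning with the Hugging Face PEFT library. The base model (bigscience/bloom-560m), dataset (Abirate/english_quotes), hyperparameters, and Hub repository name are illustrative assumptions rather than the exact values used in the video.

```python
# Illustrative sketch -- model, dataset, and hyperparameters are placeholders.
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "bigscience/bloom-560m"  # any decoder-style causal LM works
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Wrap the frozen base model with trainable low-rank (LoRA) adapter matrices.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,             # rank of the low-rank update
    lora_alpha=32,   # scaling factor applied to the update
    lora_dropout=0.05,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights

# Tokenize a small text dataset (placeholder choice).
data = load_dataset("Abirate/english_quotes")
data = data.map(lambda batch: tokenizer(batch["quote"]), batched=True)

trainer = Trainer(
    model=model,
    train_dataset=data["train"],
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Only the small adapter weights are pushed, not the full base model.
model.push_to_hub("your-username/bloom-560m-lora")  # hypothetical repo name
```

Because only the adapter matrices are saved, the upload to the Hub is a small fraction of the size of the base model, and the same base model can be reused with many different adapters.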

Syllabus

- Intro
- Problems with fine-tuning
- Introducing PEFT
- PEFT: other cool techniques
- LoRA Diagram (see the note after this syllabus)
- Hugging Face PEFT Library
- Code Walkthrough
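
As a note on the LoRA Diagram section: in the standard LoRA formulation, the pretrained weight matrix W0 is frozen and only a low-rank update is learned, so a layer computes

    h = W0·x + (alpha / r)·B·A·x

where W0 has shape d×k, B has shape d×r, A has shape r×k, and the rank r is much smaller than d and k. Training only A and B is what keeps the number of trainable parameters, and the size of the saved adapter, small.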


Taught by

Sam Witteveen

Related Courses

- Hugging Face on Azure - Partnership and Solutions Announcement (Microsoft via YouTube)
- Question Answering in Azure AI - Custom and Prebuilt Solutions - Episode 49 (Microsoft via YouTube)
- Open Source Platforms for MLOps (Duke University via Coursera)
- Masked Language Modelling - Retraining BERT with Hugging Face Trainer - Coding Tutorial (rupert ai via YouTube)
- Masked Language Modelling with Hugging Face - Microsoft Sentence Completion - Coding Tutorial (rupert ai via YouTube)