
Fine-Tuning LLMs with PEFT and LoRA

Offered By: Sam Witteveen via YouTube

Tags

LLM (Large Language Model) Courses
LoRA (Low-Rank Adaptation) Courses
Hugging Face Courses
PEFT Courses

Course Description

Overview

Explore the process of fine-tuning Large Language Models (LLMs) using Parameter-Efficient Fine-Tuning (PEFT) and Low-Rank Adaptation (LoRA) in this video. Learn about the challenges of traditional full fine-tuning and how PEFT addresses them. Delve into the LoRA technique, examining its diagram and how it is implemented. Get acquainted with the Hugging Face PEFT library and follow a detailed code walkthrough. Gain practical insight into fine-tuning decoder-style GPT models efficiently and uploading the results to the Hugging Face Hub. Additional resources, including a LoRA Colab notebook and related blog posts, are provided to further deepen your understanding of these fine-tuning techniques.
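
For a concrete picture of what the code walkthrough covers, here is a minimal sketch of LoRA fine-tuning with the Hugging Face PEFT library. The model name, LoRA rank, target modules, and Hub repository name are illustrative assumptions, not necessarily the exact settings used in the video.

    # Minimal sketch: LoRA fine-tuning of a decoder-style model with PEFT.
    # Model name, rank, target modules, and repo name are assumptions,
    # not necessarily the settings used in the video.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    model_name = "bigscience/bloom-560m"  # assumed small causal LM
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    lora_config = LoraConfig(
        r=8,                                 # rank of the low-rank update
        lora_alpha=16,                       # scaling factor for the update
        target_modules=["query_key_value"],  # attention projection in BLOOM
        lora_dropout=0.05,
        bias="none",
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # only a small fraction of weights train

    # ... train with transformers.Trainer or a custom loop ...

    # Push just the lightweight adapter weights to the Hugging Face Hub
    # (repository name is a placeholder).
    model.push_to_hub("your-username/bloom-560m-lora")

Because only the adapter matrices are saved, the upload is a few megabytes rather than the full model.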

Syllabus

- Intro
- Problems with fine-tuning
- Introducing PEFT
- Other PEFT techniques
- LoRA Diagram (see the sketch after this syllabus)
- Hugging Face PEFT Library
- Code Walkthrough
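
As a rough illustration of the LoRA diagram topic, the core idea is to keep the pretrained weight frozen and learn its update as a product of two small matrices; the dimensions below are made up for illustration.

    import numpy as np

    d, k, r = 1024, 1024, 8           # illustrative layer shape and LoRA rank
    W = np.random.randn(d, k)         # frozen pretrained weight
    A = np.random.randn(r, k) * 0.01  # trainable down-projection (small init)
    B = np.zeros((d, r))              # trainable up-projection (zero init)

    delta_W = B @ A                   # low-rank update with shape (d, k)
    W_adapted = W + delta_W           # effective weight applied at inference

    # Full weight vs. trainable adapter parameters: 1,048,576 vs. 16,384 here.
    print(W.size, A.size + B.size)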


Taught by

Sam Witteveen

Related Courses

Large Language Models: Foundation Models from the Ground Up
Databricks via edX
Pre-training and Fine-tuning of Code Generation Models
CNCF [Cloud Native Computing Foundation] via YouTube
MLOps: Fine-tuning Mistral 7B with PEFT, QLora, and MLFlow
The Machine Learning Engineer via YouTube
MLOps MLflow: Fine-Tuning Mistral 7B con PEFT y QLora - Español
The Machine Learning Engineer via YouTube
MLOps: PEFT Dialog Summarization with Flan T5 Using LoRA
The Machine Learning Engineer via YouTube