
Local LLM Fine-tuning on Mac (M1 16GB) Using QLoRA and MLX

Offered By: Shaw Talebi via YouTube

Tags

Machine Learning Courses, Inference Courses, QLoRA Courses, Mistral 7B Courses

Course Description

Overview

Explore the process of fine-tuning a large language model (LLM) locally on an M-series Mac in this comprehensive tutorial video. Learn how to adapt Mistral 7B to respond to YouTube comments in the presenter's style. Dive into topics including the motivation for local fine-tuning, an introduction to MLX, setting up the environment, and working with the example code. Gain hands-on experience with inference using both the un-finetuned and fine-tuned models, understand the QLoRA fine-tuning technique, and learn how the training dataset must be formatted. Follow along as the presenter runs training locally and offers guidance on choosing the LoRA rank. Access additional resources, including a blog post, GitHub repository, and related videos, to further deepen your understanding of LLM fine-tuning on a Mac.
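
As a taste of the workflow, here is a minimal Python sketch of the inference step using the mlx-lm package on Apple Silicon. It is not the presenter's exact code: the model identifier below is an assumed stand-in, and the generate() call details may differ across mlx-lm versions.

    # Sketch: base-model inference with mlx-lm (pip install mlx-lm).
    # The 4-bit community checkpoint is an assumption, not the exact model
    # used in the video; generate()'s signature varies between releases.
    from mlx_lm import load, generate

    model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")

    comment = "Great video, thanks for the clear explanation!"
    prompt = f"[INST] {comment} [/INST]"  # Mistral instruct-style template

    reply = generate(model, tokenizer, prompt=prompt, max_tokens=100)
    print(reply)

The same pattern can be reused after fine-tuning by pointing load() at the fused model or the trained adapters.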

Syllabus

Intro
Motivation
MLX
GitHub Repo
Setting up environment
Example Code
Inference with un-finetuned model
Fine-tuning with QLoRA
Aside: dataset formatting
Running local training
Inference with finetuned model
Note on LoRA rank
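
For the dataset-formatting and QLoRA training steps in the syllabus, a hedged sketch is below. The MLX LoRA trainer typically reads JSONL files from a data directory; the {"text": ...} record format, the Mistral [INST] template, and the CLI flags shown are assumptions based on common mlx-lm usage and may differ from the video and across releases.

    # Sketch: writing comment/response pairs into train.jsonl for LoRA training.
    import json

    pairs = [
        {"comment": "Great breakdown, thanks!", "response": "Glad it was helpful!"},
        # ... more comment/reply examples in the presenter's style
    ]

    with open("data/train.jsonl", "w") as f:
        for p in pairs:
            text = f"[INST] {p['comment']} [/INST] {p['response']}"
            f.write(json.dumps({"text": text}) + "\n")

    # Training is then launched from the shell, e.g. (flags are illustrative):
    #   python -m mlx_lm.lora --model mlx-community/Mistral-7B-Instruct-v0.2-4bit \
    #       --train --data ./data --batch-size 4 --iters 100

A lower LoRA rank and fewer adapted layers reduce memory use on a 16 GB machine at the cost of adaptation capacity; see the video's note on LoRA rank for the presenter's guidance.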


Taught by

Shaw Talebi

Related Courses

Fine-Tuning LLM with QLoRA on Single GPU - Training Falcon-7b on ChatBot Support FAQ Dataset
Venelin Valkov via YouTube
Deploy LLM to Production on Single GPU - REST API for Falcon 7B with QLoRA on Inference Endpoints
Venelin Valkov via YouTube
Building an LLM Fine-Tuning Dataset - From Reddit Comments to QLoRA Training
sentdex via YouTube
Generative AI: Fine-Tuning LLM Models Crash Course
Krish Naik via YouTube
Aligning Open Language Models - Stanford CS25 Lecture
Stanford University via YouTube