Fine-tuning Large Language Models (LLMs) with Example Code
Offered By: Shaw Talebi via YouTube
Course Description
Overview
Learn how to fine-tune large language models (LLMs) for specific use cases in this comprehensive video tutorial. Explore the concept of fine-tuning, its importance, and three different approaches to the process. Follow a step-by-step guide for supervised fine-tuning, including three parameter tuning options with a focus on Low-Rank Adaptation (LoRA). Dive into a practical example with Python code, covering base model loading, data preparation, model evaluation, and fine-tuning using LoRA. Access additional resources, including a series playlist, blog post, example code, and relevant research papers to deepen your understanding of LLM fine-tuning techniques.
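For background on the LoRA approach highlighted above: rather than updating every weight of the base model, LoRA freezes the pretrained weight matrix W and learns a low-rank update ΔW = BA, so only a small fraction of parameters is trained. The sketch below illustrates the idea in PyTorch; the dimensions, rank, and scaling factor are illustrative assumptions, not values taken from the tutorial.

```python
import torch

d_in, d_out, r = 768, 768, 8              # rank r << d; dimensions are illustrative
W = torch.randn(d_out, d_in)              # frozen pretrained weight (never updated)
A = torch.randn(r, d_in) * 0.01           # trainable down-projection, small random init
B = torch.zeros(d_out, r)                 # trainable up-projection, zero init so the update starts at 0
A.requires_grad_(); B.requires_grad_()

alpha = 16                                # LoRA scaling factor; the update is scaled by alpha / r
x = torch.randn(d_in)                     # one input activation
h = W @ x + (alpha / r) * (B @ (A @ x))   # frozen path plus low-rank update: (W + (alpha/r) * B A) x

# Trainable parameters: r * (d_in + d_out) = 12,288 vs. 589,824 for the full matrix W.
```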
Syllabus
Intro
What is Fine-tuning?
Why Fine-tune
3 Ways to Fine-tune
Supervised Fine-tuning in 5 Steps
3 Options for Parameter Tuning
Low-Rank Adaptation (LoRA)
Example code: Fine-tuning an LLM with LoRA (see the sketch after this syllabus)
Load Base Model
Data Prep
Model Evaluation
Fine-tuning with LoRA
Fine-tuned Model
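The example-code portion of the syllabus (load base model, data prep, model evaluation, fine-tuning with LoRA) follows a standard Hugging Face workflow. Below is a minimal end-to-end sketch assuming the transformers, datasets, and peft libraries; the checkpoint (distilbert-base-uncased), dataset (GLUE SST-2), target modules, and hyperparameters are assumptions for illustration, not necessarily what the video uses.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

# Load base model (assumed checkpoint) and its tokenizer.
checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Data prep: tokenize a small labeled text dataset (assumed: SST-2 from GLUE).
raw = load_dataset("glue", "sst2")
def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True)
tokenized = raw.map(tokenize, batched=True)

# Fine-tuning with LoRA: freeze the base model and train only the injected
# low-rank adapters on the attention projections (module names assume DistilBERT).
peft_config = LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=32,
                         lora_dropout=0.05, target_modules=["q_lin", "v_lin"])
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()

args = TrainingArguments(output_dir="lora-sst2", learning_rate=1e-3,
                         per_device_train_batch_size=16, num_train_epochs=1)
trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"].shuffle(seed=42).select(range(1000)),
                  eval_dataset=tokenized["validation"],
                  data_collator=DataCollatorWithPadding(tokenizer=tokenizer))
trainer.train()

# Model evaluation on the held-out split (reports loss unless compute_metrics is supplied).
print(trainer.evaluate())
```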
Taught by
Shaw Talebi
Related Courses
Generative AI Engineering and Fine-Tuning Transformers (IBM via Coursera)
Lessons From Fine-Tuning Llama-2 (Anyscale via YouTube)
The Next Million AI Apps - Developing Custom Models for Specialized Tasks (MLOps.community via YouTube)
LLM Fine-Tuning - Explained (CodeEmporium via YouTube)
Fine-tuning Large Models on Local Hardware Using PEFT and Quantization (EuroPython Conference via YouTube)