Fine-tuning Llama 2 for Tone or Style Using Shakespeare Dataset
Offered By: Trelis Research via YouTube
Course Description
Overview
Learn how to fine-tune the Llama 2 language model for tone or style using a custom dataset in this 18-minute video tutorial. Explore the process of adapting the model to mimic Shakespearean language as a worked example. Discover techniques for loading Llama 2 with bitsandbytes, implementing LoRA for efficient fine-tuning, and selecting appropriate target modules. Gain insights into setting training parameters, including batch size, gradient accumulation, and warm-up settings. Master the use of the AdamW optimizer and learn to evaluate training loss effectively. Troubleshoot common issues in Google Colab and run inference with your newly fine-tuned model. Access additional resources covering embedding creation, supervised fine-tuning, and advanced scripts to further customize your language models.
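The LoRA technique mentioned above freezes the pretrained weights and learns only a small low-rank update. A minimal NumPy sketch of the idea (dimensions, rank, and scaling are illustrative values, not Llama 2's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 64, 8  # hidden size and LoRA rank (illustrative, far smaller than Llama 2)
alpha = 16    # LoRA scaling factor

W = rng.normal(size=(d, d))         # frozen pretrained weight matrix
A = rng.normal(size=(r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                # trainable up-projection, zero-initialised

def forward(x, W, A, B):
    # Effective weight is W + (alpha / r) * B @ A; only A and B are trained,
    # so the number of trainable parameters is 2 * r * d instead of d * d.
    return x @ (W + (alpha / r) * B @ A).T

x = rng.normal(size=(1, d))
# Because B starts at zero, the adapter is initially a no-op and
# fine-tuning begins from the pretrained model's exact behaviour.
assert np.allclose(forward(x, W, A, B), x @ W.T)
```

In the tutorial this update is applied to selected attention projections inside Llama 2 (the "target modules" covered in the syllabus), rather than to a standalone matrix as sketched here.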
Syllabus
How to fine-tune on a custom dataset
What dataset should I use for fine-tuning?
Fine-tuning in Google Colab
Loading Llama 2 with bitsandbytes
Fine-tuning with LoRA
Target modules for fine-tuning
Loading data for fine-tuning
Training Llama 2 with a validation set
Setting training parameters for fine-tuning
Choosing batch size for training
Setting gradient accumulation for training
Using an eval dataset for training
Setting warm-up parameters for training
Using AdamW for optimisation
Fix for when commands don't work in Colab
Evaluating training loss
Running inference after training
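Several of the syllabus items above concern training hyperparameters. The arithmetic behind batch size, gradient accumulation, and linear warm-up can be sketched as follows (the values are illustrative, not the tutorial's exact settings):

```python
# Gradient accumulation: gradients from several micro-batches are summed
# before each optimizer step, so the effective batch size is the product
# of the per-device batch size and the accumulation steps.
per_device_batch_size = 4         # illustrative value
gradient_accumulation_steps = 8   # illustrative value
effective_batch_size = per_device_batch_size * gradient_accumulation_steps
print(effective_batch_size)  # 32

# Linear warm-up: the learning rate ramps from 0 to its peak over
# `warmup_steps` optimizer steps, then (in this simple sketch) stays flat.
def lr_at(step, peak_lr=2e-4, warmup_steps=100):
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr

print(lr_at(50))   # 1e-4, halfway through warm-up
print(lr_at(500))  # 2e-4, past warm-up
```

In practice these values are passed to the trainer as configuration (e.g. batch size, accumulation steps, warm-up, and the AdamW optimizer choice) rather than implemented by hand as above.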
Taught by
Trelis Research
Related Courses
TensorFlow: Working with NLP (LinkedIn Learning)
Introduction to Video Editing - Video Editing Tutorials (Great Learning via YouTube)
HuggingFace Crash Course - Sentiment Analysis, Model Hub, Fine Tuning (Python Engineer via YouTube)
GPT3 and Finetuning the Core Objective Functions - A Deep Dive (David Shapiro ~ AI via YouTube)
How to Build a Q&A AI in Python - Open-Domain Question-Answering (James Briggs via YouTube)