Chat Fine-Tuning for LLMs - Instruction Format, Datasets, and Implementation
Offered By: Trelis Research via YouTube
Course Description
Overview
Explore the intricacies of chat fine-tuning in this 20-minute video from Trelis Research. Dive into LLM instruction fine-tuning, understanding why it matters and how it works in practice. Learn about special tokens, stop tokens, and various prompt formats, including Guanaco, Llama 2, and chatml. Discover instruction fine-tuning datasets and follow a practical demonstration of fine-tuning a model for chat in Jupyter lab. Gain insights from the results and pick up the pro tips shared along the way. Access additional resources, including presentation slides, a Runpod affiliate link, and a Llama 2 instruction fine-tuning dataset. For those seeking advanced fine-tuning capabilities, there is an option to purchase access to comprehensive scripts and notebooks for unsupervised and supervised fine-tuning, dataset preparation, embeddings creation, and quantization.
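To make the prompt formats concrete, here is a rough, illustrative Python sketch of the Llama 2 and chatml (ChatML) templates mentioned above. This is not the video's code; exact handling of BOS/EOS and other special tokens is tokenizer-specific and is covered in the walkthrough itself.

```python
# Illustrative templates only: Llama 2 wraps the system prompt in <<SYS>> tags
# inside an [INST] block, while ChatML delimits every turn with
# <|im_start|> / <|im_end|> special tokens.

def llama2_prompt(system: str, user: str) -> str:
    """Build a single-turn Llama 2 chat prompt."""
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

def chatml_prompt(system: str, user: str) -> str:
    """Build a single-turn ChatML prompt, leaving the assistant turn open."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(llama2_prompt("You are a helpful assistant.", "What is a stop token?"))
print(chatml_prompt("You are a helpful assistant.", "What is a stop token?"))
```

During training, the assistant's reply is appended after the template and terminated with the format's stop token (</s> for Llama 2, <|im_end|> for ChatML), which is how the fine-tuned model learns when to stop generating.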
Syllabus
LLM instruction fine-tuning
Why do instruction fine-tuning?
Understanding special tokens and stop tokens
Instruction fine-tuning format
Guanaco instruction fine-tuning
Llama 2 prompt format
chatml prompt format
Instruction fine-tuning datasets
Fine-tuning a model for chat in Jupyter lab (see the sketch after this syllabus)
Chat fine-tuning results
Pro tips
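As a companion to the "Fine-tuning a model for chat in Jupyter lab" step, below is a minimal, hypothetical sketch of supervised chat fine-tuning with the Hugging Face transformers and datasets libraries. The base model, the one-example dataset, and the hyperparameters are placeholders for illustration and are not necessarily what the video uses.

```python
# Minimal supervised chat fine-tuning sketch (placeholder model, data, and settings).
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import Dataset

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Tiny illustrative dataset: each row is one prompt/response pair in Llama 2 format,
# ending with the EOS (stop) token so the model learns to stop.
examples = [
    "[INST] What is instruction fine-tuning? [/INST] "
    "Training a base model on prompt/response pairs so it follows instructions."
    + tokenizer.eos_token
]
dataset = Dataset.from_dict({"text": examples})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="chat-ft", per_device_train_batch_size=1,
                           num_train_epochs=1, logging_steps=1),
    train_dataset=tokenized,
    # mlm=False gives causal-LM labels; a production setup often masks the prompt
    # tokens so the loss is computed only on the assistant's response.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Running this as written requires a GPU with enough memory for the chosen model; the video's own notebooks add dataset preparation, evaluation, and quantization steps beyond this outline.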
Taught by
Trelis Research
Related Courses
Discover, Validate & Launch New Business Ideas with ChatGPTUdemy 150 Digital Marketing Growth Hacks for Businesses
Udemy AI: Executive Briefing
Pluralsight The Complete Digital Marketing Guide - 25 Courses in 1
Udemy Learn to build a voice assistant with Alexa
Udemy