Fine-tuning Language Models for Structured Responses with QLoRA - Lecture
Offered By: Trelis Research via YouTube
Course Description
Overview
Learn how to fine-tune language models for structured responses using QLoRA in this comprehensive video lecture. Explore techniques for generating function calls, JSON objects, and arrays. Lecture notes and a free Google Colab notebook for basic training are provided, along with advanced training options for improved performance, including a prompt loss-mask and stop-token implementation. Gain insights into model size, quantization, data setup, the training process, inference, and saving models. The lecture also points to resources for function-calling datasets and pre-trained Llama 2 models with function-calling capabilities, and closes with advanced fine-tuning techniques and attention mechanisms.
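The prompt loss-mask mentioned above can be illustrated with a short sketch. The idea is that loss is computed only on the response tokens, not the prompt, so the model learns to produce the structured answer rather than to reproduce the question. This is a minimal illustration, not the lecture's exact code; the token ids and the `build_labels` helper are hypothetical, and `-100` is the label id that Hugging Face-style cross-entropy loss ignores.

```python
# Sketch of a prompt loss-mask: mask the prompt portion of the labels so
# only response tokens contribute to the training loss.
IGNORE_INDEX = -100  # label id skipped by cross-entropy in HF-style training


def build_labels(prompt_ids, response_ids):
    """Concatenate prompt and response; mask the prompt part of the labels."""
    input_ids = list(prompt_ids) + list(response_ids)
    labels = [IGNORE_INDEX] * len(prompt_ids) + list(response_ids)
    return input_ids, labels


# Hypothetical token ids for illustration only
prompt = [101, 7592, 2129]    # e.g. the user's question
response = [2003, 2017, 102]  # e.g. the model's structured answer
input_ids, labels = build_labels(prompt, response)
print(input_ids)  # [101, 7592, 2129, 2003, 2017, 102]
print(labels)     # [-100, -100, -100, 2003, 2017, 102]
```

A stop token works alongside this: appending a dedicated end-of-response token (here, the final id in `response`) to every training example teaches the model where a structured answer ends, so generation can be cut off cleanly at inference time.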
Syllabus
Understanding Model Size
Quantization
Loading and Setting Up a Training Notebook
Data Setup and Selection
Training Process
Inference and Prediction
Saving and Pushing the Model to the Hub
ADVANCED Fine-tuning and Attention tutorial
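The first two syllabus topics, model size and quantization, come down to simple arithmetic: the memory needed for a model's weights scales with the parameter count times the bits per parameter. A rough sketch, assuming weights dominate memory (optimizer state, activations, and the KV cache are extra):

```python
# Back-of-the-envelope VRAM estimate for model weights at different precisions.
def weight_memory_gb(n_params: float, bits: int) -> float:
    """Memory for the weights alone, in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bits / 8 / 1e9


llama2_7b = 7e9  # approximate parameter count of Llama 2 7B
print(weight_memory_gb(llama2_7b, 16))  # fp16: 14.0 GB
print(weight_memory_gb(llama2_7b, 4))   # 4-bit (QLoRA-style): 3.5 GB
```

This is why QLoRA loads the base model in 4-bit precision: a 7B model drops from roughly 14 GB of weights in fp16 to about 3.5 GB, which fits on a single consumer GPU with room left for the small trainable LoRA adapters.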
Taught by
Trelis Research
Related Courses
Fine-Tuning LLM with QLoRA on Single GPU - Training Falcon-7b on ChatBot Support FAQ Dataset (Venelin Valkov via YouTube)
Deploy LLM to Production on Single GPU - REST API for Falcon 7B with QLoRA on Inference Endpoints (Venelin Valkov via YouTube)
Building an LLM Fine-Tuning Dataset - From Reddit Comments to QLoRA Training (sentdex via YouTube)
Generative AI: Fine-Tuning LLM Models Crash Course (Krish Naik via YouTube)
Aligning Open Language Models - Stanford CS25 Lecture (Stanford University via YouTube)