Fine-tuning Language Models for Structured Responses with QLoRA - Lecture
Offered By: Trelis Research via YouTube
Course Description
Overview
Learn how to fine-tune language models for structured responses using QLoRA in this comprehensive video lecture. Explore techniques for generating function calls, JSON objects, and arrays. Access lecture notes and a free Google Colab notebook for basic training. Discover advanced training options for improved performance, including prompt loss-masking and stop-token implementation. Gain insights into model size, quantization, data setup, the training process, inference, and saving models. Explore resources for function-calling datasets and pre-trained Llama 2 models with function-calling capabilities. Dive into advanced fine-tuning techniques and attention mechanisms to enhance your language-model skills.
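To give a rough idea of the setup the lecture walks through, a QLoRA fine-tune typically loads the frozen base model in 4-bit precision and attaches small trainable low-rank adapters. A minimal sketch, assuming the Hugging Face `transformers`, `peft`, and `bitsandbytes` libraries; the model name and hyperparameters here are illustrative choices, not the lecture's exact configuration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Illustrative base model; the lecture works with Llama 2 variants.
model_id = "meta-llama/Llama-2-7b-hf"

# 4-bit NF4 quantization keeps the frozen base weights small (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Trainable low-rank adapters on the attention projections; only these are updated.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

Because only the adapter weights train, the memory footprint stays far below full fine-tuning, which is what makes the free Colab notebook feasible.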
Syllabus
Understanding Model Size
Quantization
Loading and Setting Up a Training Notebook
Data Setup and Selection
Training Process
Inference and Prediction
Saving and Pushing the Model to the Hub
ADVANCED Fine-tuning and Attention tutorial
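One of the advanced training options mentioned above, the prompt loss-mask, can be sketched in plain Python: label positions covering the prompt are set to -100 (the default `ignore_index` of PyTorch's cross-entropy loss), so only the response tokens contribute to the training loss. The token IDs below are illustrative values, not from the lecture:

```python
# Ignore index used by cross-entropy loss in most trainers.
IGNORE_INDEX = -100

def mask_prompt_loss(input_ids, prompt_len):
    """Return labels where prompt positions are excluded from the loss."""
    labels = list(input_ids)
    labels[:prompt_len] = [IGNORE_INDEX] * prompt_len
    return labels

# Example: 4 prompt tokens followed by 3 response tokens (IDs are illustrative).
input_ids = [101, 2054, 2003, 102, 7592, 2088, 102]
labels = mask_prompt_loss(input_ids, prompt_len=4)
# labels -> [-100, -100, -100, -100, 7592, 2088, 102]
```

This keeps the model from being trained to reproduce the prompt itself, which the lecture presents as one way to improve structured-response quality.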
Taught by
Trelis Research
Related Courses
Hugging Face on Azure - Partnership and Solutions Announcement (Microsoft via YouTube)
Question Answering in Azure AI - Custom and Prebuilt Solutions - Episode 49 (Microsoft via YouTube)
Open Source Platforms for MLOps (Duke University via Coursera)
Masked Language Modelling - Retraining BERT with Hugging Face Trainer - Coding Tutorial (rupert ai via YouTube)
Masked Language Modelling with Hugging Face - Microsoft Sentence Completion - Coding Tutorial (rupert ai via YouTube)