Fine-tuning LLMs with Hugging Face SFT and QLoRA - LLMOps Techniques
Offered By: LLMOps Space via YouTube
Course Description
Overview
Explore supervised fine-tuning and instruction tuning for Large Language Models (LLMs) in this comprehensive 54-minute video from LLMOps Space. Delve into specialized fine-tuning techniques for adapting LLMs to niche tasks using labeled data, and learn how to enhance LLM capabilities through instruction tuning. Discover how to prepare datasets for effective instruction tuning, and gain insights into optimizing memory and speed with the BitsAndBytes library for model quantization. Understand the benefits of the PEFT library from Hugging Face and the role of LoRA in fine-tuning. Explore the functionalities of the TRL (Transformer Reinforcement Learning) library. Watch as Harpreet from Deci AI demonstrates how to fine-tune LLMs using Hugging Face SFT and QLoRA techniques, bridging the gap between model objectives and user-specific requirements.
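As a rough illustration of the workflow the video walks through, the sketch below combines the three libraries named above: BitsAndBytes for 4-bit quantization, PEFT for the LoRA adapter, and TRL's SFTTrainer for supervised fine-tuning. It is a minimal sketch, not the exact code from the session: the model name, dataset, and hyperparameters are placeholders, and SFTTrainer keyword names vary between trl releases (newer versions move dataset_text_field and max_seq_length into SFTConfig).

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig
from trl import SFTTrainer

model_id = "tiiuae/falcon-7b"  # illustrative base model, not necessarily the one used in the video

# 4-bit NF4 quantization via BitsAndBytes to reduce memory use
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# LoRA adapter configuration from the PEFT library
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Example instruction-tuning dataset with a "text" column of formatted prompts
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=lora_config,
    dataset_text_field="text",
    max_seq_length=512,
    tokenizer=tokenizer,
    args=TrainingArguments(
        output_dir="qlora-out",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        learning_rate=2e-4,
        num_train_epochs=1,
        logging_steps=10,
    ),
)
trainer.train()
trainer.save_model("qlora-out")  # saves only the LoRA adapter weights, not the full base model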
Syllabus
Fine-tuning LLMs with Hugging Face SFT | QLoRA | LLMOps
Taught by
LLMOps Space
Related Courses
Fine-Tuning LLM Models - Generative AI Course
freeCodeCamp
Building an LLM Fine-Tuning Dataset - From Reddit Comments to QLoRA Training
sentdex via YouTube
Deploy LLM to Production on Single GPU - REST API for Falcon 7B with QLoRA on Inference Endpoints
Venelin Valkov via YouTube
Fine-tuning Language Models for Structured Responses with QLoRa - Lecture
Trelis Research via YouTube
Fine-Tuning LLM with QLoRA on Single GPU - Training Falcon-7b on ChatBot Support FAQ Dataset
Venelin Valkov via YouTube