Fine-tuning LLMs with Hugging Face SFT and QLoRA - LLMOps Techniques
Offered By: LLMOps Space via YouTube
Course Description
Overview
Explore supervised fine-tuning and instruction tuning for Large Language Models (LLMs) in this 54-minute video from LLMOps Space. Learn how fine-tuning adapts LLMs to niche tasks using labeled data, and how instruction tuning extends their capabilities beyond the base training objective. Cover dataset preparation for effective instruction tuning, memory and speed optimization through model quantization with the BitsAndBytes library, the benefits of Hugging Face's PEFT library and the role of LoRA in fine-tuning, and the functionality of the TRL (Transformer Reinforcement Learning) library. Watch as Harpreet from Deci AI demonstrates how to fine-tune LLMs using Hugging Face SFT and QLoRA, bridging the gap between general model objectives and user-specific requirements.
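The overview mentions BitsAndBytes quantization, PEFT/LoRA, and TRL's SFTTrainer; for orientation, a minimal QLoRA setup along those lines might look like the sketch below. The model and dataset names are illustrative assumptions rather than what the video necessarily uses, and the exact SFTTrainer keyword arguments vary across TRL versions (newer releases move options such as max_seq_length into an SFTConfig), so treat this as an outline, not the video's exact code.

```python
# Hedged QLoRA sketch: 4-bit quantized base model (bitsandbytes) + LoRA adapters
# (peft) trained with TRL's SFTTrainer. Model/dataset choices are assumptions.
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from peft import LoraConfig
from trl import SFTTrainer

model_id = "mistralai/Mistral-7B-v0.1"  # assumption: any causal LM could be used

# Quantize the frozen base weights to 4-bit NF4 to reduce GPU memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # many decoder-only tokenizers lack a pad token

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA config from PEFT: only these small low-rank adapter matrices are trained.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; model-dependent
    bias="none",
    task_type="CAUSAL_LM",
)

# Instruction-tuning dataset with a formatted "text" column (assumed example).
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

training_args = TrainingArguments(
    output_dir="qlora-out",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    num_train_epochs=1,
    logging_steps=10,
    optim="paged_adamw_8bit",  # paged optimizer pairs well with 4-bit training
)

# Argument names below match older TRL releases; newer versions use SFTConfig.
trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    peft_config=lora_config,
    dataset_text_field="text",
    max_seq_length=512,
    tokenizer=tokenizer,
)
trainer.train()
trainer.save_model("qlora-out/adapter")  # saves only the LoRA adapter weights
```

After training, only the lightweight adapter is saved; at inference time it is loaded on top of the same quantized base model, which is what keeps QLoRA's storage and memory footprint small.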
Syllabus
Fine-tuning LLMs with Hugging Face SFT | QLoRA | LLMOps
Taught by
LLMOps Space
Related Courses
Big Self-Supervised Models Are Strong Semi-Supervised Learners (Yannic Kilcher via YouTube)
A Transformer-Based Framework for Multivariate Time Series Representation Learning (Launchpad via YouTube)
Inside ChatGPT - Unveiling the Training Process of OpenAI's Language Model (Krish Naik via YouTube)
Fine Tune GPT-3.5 Turbo (Data Science Dojo via YouTube)
Yi 34B: The Rise of Powerful Mid-Sized Models - Base, 200k, and Chat (Sam Witteveen via YouTube)