Fine-Tuning LLM with QLoRA on Single GPU - Training Falcon-7b on ChatBot Support FAQ Dataset
Offered By: Venelin Valkov via YouTube
Course Description
Overview
Learn how to fine-tune the Falcon-7b large language model on a custom dataset of chatbot customer-support FAQs using QLoRA. The course walks through loading the quantized model, attaching a LoRA adapter, and running the fine-tuning loop. Monitor training progress with TensorBoard, and compare the untrained and trained models by evaluating their responses to a set of prompts. Along the way, gain insight into working with LLMs that are free to use for both research and commercial purposes, and learn techniques for adapting powerful language models to specific tasks on limited computational resources.
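The core idea behind the LoRA adapter used here can be sketched in a few lines of NumPy. This is a toy illustration with made-up dimensions (not Falcon's real layer shapes): the pretrained weight stays frozen while two small low-rank factors are trained, and their product is added to the base weight.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small dimensions for illustration; r << d is the
# low-rank bottleneck that keeps the number of trainable parameters tiny.
d_out, d_in, r = 64, 64, 8
alpha = 16  # LoRA scaling hyperparameter

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, init to zero

def lora_forward(x):
    # Base path plus the scaled low-rank update: W x + (alpha/r) * B A x.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# Because B starts at zero, the adapted model initially matches the base model.
assert np.allclose(lora_forward(x), W @ x)

# After training, the update can be merged into W for zero-overhead inference.
W_merged = W + (alpha / r) * (B @ A)
```

Initializing B to zero is what makes fine-tuning start from exactly the pretrained behavior; only the small A and B matrices receive gradient updates.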
Syllabus
- Introduction
- Text Tutorial on MLExpert.io
- Falcon LLM
- Google Colab Setup
- Dataset
- Load Falcon 7b and QLoRA Adapter
- Try the Model Before Training
- HuggingFace Dataset
- Training
- Save the Trained Model
- Load the Trained Model
- Evaluation
- Conclusion
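The "Load Falcon 7b and QLoRA Adapter" step relies on storing the frozen base weights in 4-bit precision. As a rough sketch of what that quantization does, here is a simplified blockwise absmax scheme in NumPy; the real QLoRA implementation uses the non-uniform NF4 codebook and double quantization, but the block structure and per-block scales follow the same pattern. All names and the block size are illustrative choices, not the library's API.

```python
import numpy as np

def quantize_blockwise(w, block_size=64):
    # Split the flat weight vector into blocks and store one absmax
    # scale per block, mapping values to the signed 4-bit range [-7, 7].
    w = w.reshape(-1, block_size)
    scales = np.abs(w).max(axis=1, keepdims=True)
    q = np.round(w / scales * 7).astype(np.int8)
    return q, scales

def dequantize_blockwise(q, scales):
    # Reverse the mapping: integers back to floats via the stored scales.
    return (q.astype(np.float32) / 7) * scales

rng = np.random.default_rng(1)
w = rng.standard_normal(4096).astype(np.float32)

q, scales = quantize_blockwise(w)
w_hat = dequantize_blockwise(q, scales).reshape(-1)

# The round trip is lossy but close; during QLoRA fine-tuning the
# full-precision LoRA adapter compensates for this quantization error.
max_err = np.abs(w - w_hat).max()
```

Storing 4-bit integers plus a small scale per block is what lets a 7B-parameter model fit on a single consumer GPU while the adapter trains on top.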
Taught by
Venelin Valkov
Related Courses
- How to Do Stable Diffusion LORA Training by Using Web UI on Different Models (Software Engineering Courses - SE Courses via YouTube)
- MicroPython & WiFi (Kevin McAleer via YouTube)
- Building a Wireless Community Sensor Network with LoRa (Hackaday via YouTube)
- ComfyUI - Node Based Stable Diffusion UI (Olivio Sarikas via YouTube)
- AI Masterclass for Everyone - Stable Diffusion, ControlNet, Depth Map, LORA, and VR (Hugh Hou via YouTube)