YoVDO

Fine-Tuning LLM with QLoRA on Single GPU - Training Falcon-7b on ChatBot Support FAQ Dataset

Offered By: Venelin Valkov via YouTube

Tags

LLM (Large Language Model) Courses, Machine Learning Courses, LoRA (Low-Rank Adaptation) Courses, Model Evaluation Courses, TensorBoard Courses, QLoRA Courses

Course Description

Overview

Learn how to fine-tune the Falcon 7b Large Language Model on a custom dataset of chatbot customer support FAQs using QLoRA. Explore the process of loading the model, implementing a LoRA adapter, and conducting fine-tuning. Monitor training progress with TensorBoard and compare the performance of untrained and trained models by evaluating responses to various prompts. Gain insights into working with free-to-use LLMs for research and commercial purposes, and discover techniques for adapting powerful language models to specific tasks using limited computational resources.
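The key idea that makes this feasible with limited computational resources is LoRA's low-rank trick: instead of updating a full weight matrix, only two small matrices are trained. A back-of-the-envelope sketch in pure Python (toy dimensions chosen for illustration, not taken from the course) shows the parameter savings:

```python
# Toy illustration of why LoRA fits on a single GPU.
# A full d-by-k weight update is replaced by a rank-r product B @ A,
# so only (d*r + r*k) parameters need gradients. Sizes are illustrative.
d, k, r = 1024, 1024, 16  # layer dims and LoRA rank (hyperparameter)

full_params = d * k            # parameters updated by full fine-tuning
lora_params = d * r + r * k    # parameters in the low-rank adapter

print(f"full fine-tune: {full_params:,} trainable params")
print(f"LoRA adapter:   {lora_params:,} trainable params "
      f"({100 * lora_params / full_params:.3f}% of full)")
# -> 1,048,576 vs 32,768 (3.125% of full)
```

QLoRA adds 4-bit quantization of the frozen base weights on top of this, shrinking memory further so a 7B-parameter model fits on one GPU.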

Syllabus

- Introduction
- Text Tutorial on MLExpert.io
- Falcon LLM
- Google Colab Setup
- Dataset
- Load Falcon 7b and QLoRA Adapter
- Try the Model Before Training
- HuggingFace Dataset
- Training
- Save the Trained Model
- Load the Trained Model
- Evaluation
- Conclusion


Taught by

Venelin Valkov

Related Courses

Google BARD and ChatGPT AI for Increased Productivity
Udemy
Bringing LLM to the Enterprise - Training From Scratch or Just Fine-Tune With Cerebras-GPT
Prodramp via YouTube
Generative AI and Long-Term Memory for LLMs
James Briggs via YouTube
Extractive Q&A With Haystack and FastAPI in Python
James Briggs via YouTube
OpenAssistant First Models Are Here! - Open-Source ChatGPT
Yannic Kilcher via YouTube