Fine-tuning Tiny LLM for Sentiment Analysis - TinyLlama and LoRA on a Single GPU
Offered By: Venelin Valkov via YouTube
Course Description
Overview
Learn how to fine-tune a small language model (a "tiny LLM") such as Phi-2 or TinyLlama to improve its performance on a custom dataset. Explore the process of setting up a dataset, model, tokenizer, and LoRA adapter for sentiment analysis. Follow along as the video demonstrates training TinyLlama on a single GPU with custom data, evaluating predictions, and walking through the fine-tuning process step by step. Gain insights into preparing datasets, configuring models and tokenizers, managing token counts, implementing LoRA for parameter-efficient fine-tuning, interpreting training results, running inference with the trained model, and conducting evaluations to assess performance improvements.
Syllabus
- Intro
- Text tutorial on MLExpert
- Why fine-tune a Tiny LLM?
- Prepare the dataset
- Model & tokenizer setup
- Token counts
- Fine-tuning with LoRA
- Training results & saving the model
- Inference with the trained model
- Evaluation
- Conclusion
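The "Fine-tuning with LoRA" step relies on a simple idea: keep the base weight matrix frozen and learn a low-rank update instead. A plain NumPy sketch of the math (dimensions and scaling are illustrative, not the video's settings):

```python
import numpy as np

# LoRA: instead of updating a full weight matrix W (d_out x d_in),
# train two small matrices A (r x d_in) and B (d_out x r) and use
# W + (alpha / r) * B @ A as the effective weight.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 512, 512, 8, 16  # illustrative sizes

W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((r, d_in))       # trainable low-rank factor
B = np.zeros((d_out, r))                 # zero-initialised, so the
                                         # adapter starts as a no-op
W_eff = W + (alpha / r) * B @ A

full_params = d_out * d_in               # what full fine-tuning trains
lora_params = r * (d_in + d_out)         # what LoRA trains instead
print(full_params, lora_params)          # 262144 vs 8192 parameters
```

Because `B` starts at zero, the model's initial behavior is identical to the base model, and only the small `A`/`B` factors receive gradient updates, which is what makes single-GPU fine-tuning of TinyLlama practical.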
Taught by
Venelin Valkov
Related Courses
- How to Do Stable Diffusion LORA Training by Using Web UI on Different Models (Software Engineering Courses - SE Courses via YouTube)
- MicroPython & WiFi (Kevin McAleer via YouTube)
- Building a Wireless Community Sensor Network with LoRa (Hackaday via YouTube)
- ComfyUI - Node Based Stable Diffusion UI (Olivio Sarikas via YouTube)
- AI Masterclass for Everyone - Stable Diffusion, ControlNet, Depth Map, LORA, and VR (Hugh Hou via YouTube)