YoVDO

Fine-tuning Tiny LLM for Sentiment Analysis - TinyLlama and LoRA on a Single GPU

Offered By: Venelin Valkov via YouTube

Tags

Machine Learning Courses
Deep Learning Courses
Sentiment Analysis Courses
LoRA (Low-Rank Adaptation) Courses
GPU Computing Courses
Model Evaluation Courses
Language Models Courses
Fine-Tuning Courses

Course Description

Overview

Learn how to fine-tune a small language model such as Phi-2 or TinyLlama to improve its performance on a custom dataset. Explore the process of setting up a dataset, model, tokenizer, and LoRA adapter for sentiment analysis. Follow along as the video demonstrates training TinyLlama on a single GPU with custom data, evaluating predictions, and walking through the fine-tuning process step by step. Gain insights into preparing datasets, configuring models and tokenizers, managing token counts, implementing LoRA for efficient fine-tuning, interpreting training results, performing inference with the trained model, and conducting evaluations to assess performance improvements.
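The LoRA adapter mentioned in the description can be illustrated without any deep-learning framework. The sketch below (illustrative only, not the course's actual code) shows the core idea: freeze the pretrained weight matrix and train two small low-rank matrices whose product is added to it, which is why fine-tuning fits on a single GPU.

```python
import numpy as np

# Minimal sketch of the LoRA idea: instead of updating a full weight
# matrix W (d_out x d_in), train two small matrices B (d_out x r) and
# A (r x d_in) with rank r << d, and add their product to W's output.
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 4, 8   # illustrative sizes, not TinyLlama's

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))               # trainable, zero init: adapter starts as a no-op

def lora_forward(x):
    # y = W x + (alpha / r) * B A x  -- the adapted linear layer
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B = 0 the adapter contributes nothing, so outputs match the frozen layer.
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameter count: r*(d_in + d_out) for LoRA vs d_in*d_out for full fine-tuning.
print(r * (d_in + d_out), "vs", d_in * d_out)
```

With these toy sizes the adapter trains 512 parameters instead of 4096; on a real model the savings are what make single-GPU fine-tuning practical.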

Syllabus

- Intro
- Text tutorial on MLExpert
- Why fine-tune a tiny LLM?
- Prepare the dataset
- Model & tokenizer setup
- Token counts
- Fine-tuning with LoRA
- Training results & saving the model
- Inference with the trained model
- Evaluation
- Conclusion
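The evaluation step in the syllabus typically means parsing the model's free-text generations into sentiment labels and scoring them against gold labels. A minimal sketch, with hypothetical example data (the helper name and samples are illustrative, not from the course):

```python
# Hypothetical evaluation sketch: score free-text model outputs
# against gold sentiment labels. Data here is made up for illustration.
def extract_label(generation: str) -> str:
    """Pick the first known sentiment word found in the model's output."""
    text = generation.lower()
    for label in ("positive", "negative", "neutral"):
        if label in text:
            return label
    return "unknown"

predictions = [
    "Sentiment: positive",
    "The review is negative.",
    "neutral overall",
    "I cannot tell",
]
gold = ["positive", "negative", "positive", "neutral"]

labels = [extract_label(p) for p in predictions]
accuracy = sum(p == g for p, g in zip(labels, gold)) / len(gold)
print(f"accuracy = {accuracy:.2f}")  # 2 of 4 correct -> accuracy = 0.50
```

Comparing this accuracy before and after fine-tuning is one simple way to assess whether the LoRA training actually improved the model.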


Taught by

Venelin Valkov

Related Courses

Neural Networks for Machine Learning
University of Toronto via Coursera
機器學習技法 (Machine Learning Techniques)
National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning
University of Washington via Coursera
Прикладные задачи анализа данных (Applied Problems in Data Analysis)
Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning
Microsoft via edX