
Fine-tuning Tiny LLM for Sentiment Analysis - TinyLlama and LoRA on a Single GPU

Offered By: Venelin Valkov via YouTube

Tags

Machine Learning Courses, Deep Learning Courses, Sentiment Analysis Courses, LoRA (Low-Rank Adaptation) Courses, GPU Computing Courses, Model Evaluation Courses, Language Models Courses, Fine-Tuning Courses

Course Description

Overview

Learn how to fine-tune a small language model such as Phi-2 or TinyLlama to potentially improve its performance on a custom dataset. Explore the process of setting up the dataset, model, tokenizer, and LoRA adapter for sentiment analysis. Follow along as the video demonstrates training TinyLlama on a single GPU with custom data, evaluating its predictions, and walking through the fine-tuning process step by step. Gain insight into preparing datasets, configuring the model and tokenizer, managing token counts, applying LoRA for parameter-efficient fine-tuning, interpreting training results, running inference with the trained model, and evaluating how much performance improves.
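
As a rough illustration of the workflow described above, a LoRA fine-tuning run for TinyLlama on a single GPU could be set up along the lines of the sketch below. This is a minimal sketch assuming the Hugging Face transformers, peft, and datasets libraries; the checkpoint name, the imdb dataset, the prompt format, and the hyperparameters are illustrative assumptions, not the exact choices made in the video (the video's own code is covered in the linked MLExpert text tutorial).

```python
# Minimal sketch, assuming the Hugging Face transformers/peft/datasets stack.
# The checkpoint, dataset, prompt format, and hyperparameters below are
# illustrative placeholders, not the exact choices made in the video.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Attach a LoRA adapter: only small low-rank matrices on the attention
# projections are trained, which is what keeps the run on a single GPU.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights

# Hypothetical prompt format; the video builds prompts from its own dataset.
def to_prompt(example):
    label = "positive" if example["label"] == 1 else "negative"
    text = f"Classify the sentiment: {example['text']}\nSentiment: {label}"
    return tokenizer(text, truncation=True, max_length=512)

train_data = load_dataset("imdb", split="train[:1000]").map(to_prompt)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="tinyllama-sentiment-lora",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
        report_to="none",
    ),
    train_dataset=train_data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("tinyllama-sentiment-lora")  # saves the adapter weights
```

Because only the low-rank adapter matrices receive gradients, the trainable parameter count stays small, which is what makes this kind of fine-tuning feasible on a single GPU rather than requiring a full-parameter training setup.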

Syllabus

- Intro
- Text tutorial on MLExpert
- Why fine-tune a Tiny LLM?
- Prepare the dataset
- Model & tokenizer setup
- Token counts
- Fine-tuning with LoRA
- Training results & saving the model
- Inference with the trained model
- Evaluation
- Conclusion
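
For the last two syllabus steps, inference and evaluation with the trained model might look roughly like the following sketch. It assumes the adapter directory and the hypothetical prompt format from the sketch above; the two hand-written examples stand in for a real held-out test set, so the printed accuracy is purely illustrative.

```python
# Minimal sketch of inference and evaluation with the saved LoRA adapter,
# reusing the assumed checkpoint, prompt format, and output directory above.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_name)
base_model = AutoModelForCausalLM.from_pretrained(base_name)
model = PeftModel.from_pretrained(base_model, "tinyllama-sentiment-lora")
model.eval()

def predict_sentiment(review: str) -> str:
    prompt = f"Classify the sentiment: {review}\nSentiment:"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=5, do_sample=False)
    # Keep only the newly generated tokens after the prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()

# Toy evaluation: accuracy over a couple of hand-written examples, standing in
# for the held-out test split a real evaluation would use.
examples = [
    ("I loved every minute of it", "positive"),
    ("Terrible acting and a waste of time", "negative"),
]
correct = sum(predict_sentiment(text).lower().startswith(label) for text, label in examples)
print(f"accuracy: {correct / len(examples):.2f}")
```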


Taught by

Venelin Valkov

Related Courses

TensorFlow: Working with NLP
LinkedIn Learning
Introduction to Video Editing - Video Editing Tutorials
Great Learning via YouTube
HuggingFace Crash Course - Sentiment Analysis, Model Hub, Fine Tuning
Python Engineer via YouTube
GPT3 and Finetuning the Core Objective Functions - A Deep Dive
David Shapiro ~ AI via YouTube
How to Build a Q&A AI in Python - Open-Domain Question-Answering
James Briggs via YouTube