Fine-tuning LLMs Without Maxing Out Your GPU - LoRA for Parameter-Efficient Training

Offered By: Data Centric via YouTube

Tags

LoRA (Low-Rank Adaptation), Text Classification, Fine-Tuning, RoBERTa

Course Description

Overview

Learn how to use LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning of large language models in this 47-minute video. Follow along as the instructor fine-tunes RoBERTa to classify consumer finance complaints using Google Colab with a V100 GPU. The walkthrough covers the end-to-end process, with access to a detailed notebook and accompanying technical blog, and shows how to keep GPU memory usage low while still fine-tuning effectively. Additional resources cover building LLM-powered applications, understanding precision and recall, and booking consultations for further guidance.
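
For readers who want a feel for the technique before watching, below is a minimal sketch of LoRA fine-tuning for sequence classification, assuming the Hugging Face transformers and peft libraries. The model name, label count, and LoRA hyperparameters here are illustrative placeholders, not the instructor's exact settings from the video.

    from peft import LoraConfig, TaskType, get_peft_model
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    # Base encoder; the video fine-tunes RoBERTa on consumer finance complaints.
    # num_labels is an illustrative choice, not the dataset's actual label count.
    model = AutoModelForSequenceClassification.from_pretrained(
        "roberta-base", num_labels=5)
    tokenizer = AutoTokenizer.from_pretrained("roberta-base")

    # Wrap the model with LoRA adapters: only the small low-rank matrices
    # injected into the attention projections receive gradients.
    lora_config = LoraConfig(
        task_type=TaskType.SEQ_CLS,
        r=8,                # rank of the low-rank update (illustrative)
        lora_alpha=16,      # scaling factor applied to the update
        lora_dropout=0.1,
        target_modules=["query", "value"],  # RoBERTa attention projections
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # typically well under 1% of all weights

Because only the adapter matrices are trained, gradient and optimizer-state memory shrink dramatically, which is what makes fine-tuning feasible on a single Colab V100 as demonstrated in the video.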

Syllabus

Fine-tune Your LLMs Without Maxing Out Your GPU!


Taught by

Data Centric

Related Courses

Multi-Label Classification on Unhealthy Comments - Finetuning RoBERTa with PyTorch - Coding Tutorial
rupert ai via YouTube
Hugging Face Transformers - The Basics - Practical Coding Guides - NLP Models (BERT/RoBERTa)
rupert ai via YouTube
Programming Language of the Future: AI in Your Native Language
Linux Foundation via YouTube
Pre-training and Pre-trained Models in Advanced NLP - Lecture 5
Graham Neubig via YouTube
MLOps: OpenVino Quantized Pipeline for Grammatical Error Correction
The Machine Learning Engineer via YouTube