YoVDO

Fine-tuning LLMs Without Maxing Out Your GPU - LoRA for Parameter-Efficient Training

Offered By: Data Centric via YouTube

Tags

LoRA (Low-Rank Adaptation) Courses
Text Classification Courses
Fine-Tuning Courses
RoBERTa Courses

Course Description

Overview

Learn how to use LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning of large language models in this 47-minute video. Follow along as the instructor fine-tunes RoBERTa to classify consumer finance complaints on Google Colab with a V100 GPU. The walkthrough covers the end-to-end process, with links to a detailed notebook and a technical blog post. Learn how to keep GPU usage manageable while still fine-tuning effectively, and explore additional resources on building LLM-powered applications, understanding precision and recall, and booking consultations for further guidance.
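The core idea behind LoRA can be sketched in a few lines. The following is a minimal standalone illustration (not the video's actual notebook, which uses RoBERTa and a GPU): the pretrained weight matrix W is frozen, and only a low-rank update B @ A is trained, where the rank r is far smaller than the layer's dimensions. The dimension 768 below matches a RoBERTa-base projection layer; the rank and scaling values are illustrative assumptions.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=8):
    """LoRA-adapted linear layer: frozen path x @ W.T plus a scaled
    low-rank update (alpha / r) * x @ A.T @ B.T."""
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

# Parameter savings for one 768 x 768 projection (RoBERTa-base sized):
d, r = 768, 8
full_params = d * d          # trainable weights if fully fine-tuned
lora_params = r * d + d * r  # trainable weights in the A and B adapters
print(f"full: {full_params}, lora: {lora_params}, "
      f"ratio: {lora_params / full_params:.1%}")
```

With r=8, the adapters hold roughly 2% of the layer's parameters, which is why LoRA fine-tuning fits on a single modest GPU. In practice this is handled by a library such as Hugging Face PEFT rather than written by hand.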

Syllabus

Fine-tune your LLMs, Without Maxing out Your GPU!


Taught by

Data Centric

Related Courses

Applied Text Mining in Python
University of Michigan via Coursera
Natural Language Processing
Higher School of Economics via Coursera
Exploitez des données textuelles
CentraleSupélec via OpenClassrooms
Basic Sentiment Analysis with TensorFlow
Coursera Project Network via Coursera
Build Multilayer Perceptron Models with Keras
Coursera Project Network via Coursera