Fine-tuning LLMs Without Maxing Out Your GPU - LoRA for Parameter-Efficient Training
Offered By: Data Centric via YouTube
Course Description
Overview
Learn how to use LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning of large language models in this 47-minute video. Follow along as the instructor fine-tunes RoBERTa to classify consumer finance complaints on Google Colab with a V100 GPU. Gain insight into the end-to-end process, with access to a detailed notebook and technical blog post. Discover how to keep GPU memory usage low while still fine-tuning effectively, and explore additional resources on building LLM-powered applications, understanding precision and recall, and booking consultations for further guidance.
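The core idea behind LoRA is to freeze the pretrained weight matrix W and learn only a low-rank update B·A, so the number of trainable parameters drops from out×in to r×(out+in) for a small rank r. The sketch below is a minimal, illustrative NumPy implementation of that update, assuming a plain linear layer; it is not the notebook's code (which uses RoBERTa and a GPU training stack), and all names and dimensions here are invented for illustration.

```python
import numpy as np

class LoRALinear:
    """Minimal sketch of a LoRA-adapted linear layer (illustrative only)."""

    def __init__(self, weight, r=8, alpha=16, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W = weight                        # frozen pretrained weight, shape (out, in)
        out_dim, in_dim = weight.shape
        # Trainable low-rank factors: A starts small and random, B starts at
        # zero, so the adapter is a no-op before any training happens.
        self.A = rng.normal(0.0, 0.01, size=(r, in_dim))
        self.B = np.zeros((out_dim, r))
        self.scale = alpha / r                 # common scaling convention

    def forward(self, x):
        # y = x W^T + scale * x A^T B^T  -- only A and B would receive gradients.
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

# Before training, the adapted layer matches the frozen layer exactly.
W = np.random.default_rng(1).normal(size=(4, 6))
layer = LoRALinear(W)
x = np.ones((2, 6))
assert np.allclose(layer.forward(x), x @ W.T)
```

With r=8 on a 4×6 toy matrix the savings are trivial, but on a transformer attention projection (e.g. 768×768) the same construction trains roughly 12k parameters instead of ~590k per layer, which is why the video's fine-tune fits comfortably on a single V100.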
Syllabus
Fine-tune your LLMs, Without Maxing out Your GPU!
Taught by
Data Centric
Related Courses
Applied Text Mining in Python - University of Michigan via Coursera
Natural Language Processing - Higher School of Economics via Coursera
Exploitez des données textuelles - CentraleSupélec via OpenClassrooms
Basic Sentiment Analysis with TensorFlow - Coursera Project Network via Coursera
Build Multilayer Perceptron Models with Keras - Coursera Project Network via Coursera