YoVDO

Fine-tuning LLMs - Every Step Explained for Memorization Tasks

Offered By: Trelis Research via YouTube

Tags

Fine-Tuning Courses, Hyperparameter Tuning Courses

Course Description

Overview

Dive deep into the intricacies of fine-tuning Large Language Models (LLMs) with a comprehensive 47-minute video tutorial. Explore key concepts such as GPTs as statistical models, the reversal curse, and synthetic dataset generation. Learn practical skills including selecting optimal batch sizes, determining appropriate learning rates, and choosing the right number of training epochs. Follow along with step-by-step instructions for dataset generation and fine-tuning script implementation. Analyze performance through hyperparameter ablation studies and base model comparisons. Conclude with valuable recommendations for fine-tuning LLMs specifically for memorization tasks, equipping you with the knowledge to enhance model performance in your own projects.
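For orientation, below is a minimal, illustrative sketch of the kind of workflow the video walks through: a tiny synthetic dataset paired with explicit choices of batch size, learning rate, and epoch count. It assumes a Hugging Face transformers setup; the base model, the toy facts, and every hyperparameter value shown are placeholders for illustration, not the settings recommended in the video.

# Illustrative sketch only: fine-tuning a causal LLM on a toy synthetic dataset
# with Hugging Face transformers. Model name, facts, and hyperparameter values
# are placeholders, not the course's recommendations.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "openai-community/gpt2"  # placeholder; the video compares base models
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Synthetic facts stated in both directions -- one common way to work around
# the reversal curse ("A is B" does not automatically teach a model "B is A").
facts = [
    {"text": "Q: Who founded Acme Robotics? A: Jane Doe."},
    {"text": "Q: Which company did Jane Doe found? A: Acme Robotics."},
]
train_dataset = Dataset.from_list(facts).map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=128),
    remove_columns=["text"],
)

args = TrainingArguments(
    output_dir="memorization-finetune",
    per_device_train_batch_size=1,  # batch size is one of the knobs the video ablates
    learning_rate=1e-5,             # likewise the learning rate
    num_train_epochs=3,             # and the number of epochs
    logging_steps=1,
)

Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    # Causal-LM collator (mlm=False) copies input_ids into labels for next-token loss.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()

The video's ablations cover how varying these hyperparameters and the base model affects how reliably the fine-tuned model recalls the injected facts.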

Syllabus

Fine-tuning on a custom dataset
Video Overview
GPTs as statistical models
What is the reversal curse?
Synthetic dataset generation
Choosing the best batch size
What learning rate to use for fine-tuning?
How many epochs to train for?
Choosing the right base model
Step-by-step dataset generation
Fine-tuning script, step-by-step
Performance Ablation: Hyperparameters
Performance Ablation: Base Models
Final Recommendations for Fine-tuning for Memorization


Taught by

Trelis Research

Related Courses

Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization
DeepLearning.AI via Coursera
Machine Learning in the Enterprise
Google Cloud via Coursera
Art and Science of Machine Learning 日本語版
Google Cloud via Coursera
Art and Science of Machine Learning auf Deutsch
Google Cloud via Coursera
Art and Science of Machine Learning en Español
Google Cloud via Coursera