Pretraining LLMs
Offered By: DeepLearning.AI via Coursera
Course Description
Overview
In Pretraining LLMs, you’ll explore pretraining, the first step in training large language models. You’ll learn the essential steps to pretrain an LLM, understand the associated costs, and discover how starting with smaller, existing open-source models can be more cost-effective.
Pretraining teaches an LLM to predict the next token from vast text datasets, producing a base model; that base model then requires further fine-tuning for optimal performance and safety. In this course, you’ll learn to pretrain a model from scratch, and also to take a model that’s already been pretrained and continue the pretraining process on your own data.
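As a rough illustration of the next-token objective described above, here is a minimal sketch using the Hugging Face transformers library and a small GPT-2 checkpoint; both are illustrative assumptions, not the specific tools or models used in the course.

```python
# Minimal sketch of the next-token (causal LM) objective, assuming the
# Hugging Face transformers library and a small GPT-2 checkpoint purely
# for illustration; the course does not prescribe a specific model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "Pretraining teaches a model to predict the next token."
inputs = tokenizer(text, return_tensors="pt")

# For causal LM training, the labels are the input ids themselves;
# the model shifts them internally to score each next-token prediction.
outputs = model(**inputs, labels=inputs["input_ids"])
print(f"Next-token prediction loss: {outputs.loss.item():.3f}")
```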
In detail:
1. Explore scenarios where pretraining is the optimal choice for model performance. Compare text generation across different versions of the same model to understand the performance differences between base, fine-tuned, and specialized pretrained models.
2. Learn how to create a high-quality training dataset using web text and existing datasets, which is crucial for effective model pretraining.
3. Prepare your cleaned dataset for training. Learn how to package your training data for use with the Hugging Face library.
4. Explore ways to configure and initialize a model for training and see how these choices impact the speed of pretraining.
5. Learn how to configure and execute a training run, enabling you to train your own model; steps 2 through 5 are illustrated in the first code sketch after this list.
6. Learn how to assess your trained model’s performance and explore common evaluation strategies for LLMs, including important benchmark tasks used to compare different models’ performance; a perplexity-based evaluation sketch follows the list.
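As a hedged sketch of steps 2 through 5, the code below packages a tiny cleaned dataset with the Hugging Face datasets library, initializes a small model from a configuration, and runs a short training job with the Trainer API. The dataset contents, model size, and hyperparameters are illustrative assumptions rather than the course’s actual choices.

```python
# Hedged sketch of steps 2-5: packaging text data, initializing a model from a
# configuration, and running a short training job. Dataset, model size, and
# hyperparameters here are illustrative assumptions, not the course's choices.
from datasets import Dataset
from transformers import (
    AutoConfig,
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Package cleaned text into a Hugging Face Dataset and tokenize it.
raw_dataset = Dataset.from_dict(
    {"text": ["First cleaned document...", "Second cleaned document..."]}
)
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = raw_dataset.map(tokenize, batched=True, remove_columns=["text"])

# Initialize a small model from a configuration (random weights), as when
# pretraining from scratch; use from_pretrained() instead to continue
# pretraining an existing checkpoint.
config = AutoConfig.from_pretrained("gpt2", n_layer=4, n_head=4, n_embd=256)
model = AutoModelForCausalLM.from_config(config)

# Configure and execute a (very short) training run.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
args = TrainingArguments(
    output_dir="pretraining-demo",
    per_device_train_batch_size=2,
    num_train_epochs=1,
    logging_steps=1,
)
trainer = Trainer(
    model=model, args=args, train_dataset=tokenized, data_collator=collator
)
trainer.train()
```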
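For step 6, one common first check is perplexity on held-out text, as sketched below; broader benchmark comparisons are typically run with dedicated evaluation harnesses rather than by hand. The model and evaluation text here are placeholders.

```python
# Hedged sketch of one common evaluation: perplexity on held-out text.
# The model name and evaluation text are placeholders.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

held_out_text = "A short passage the model has not been trained on."
inputs = tokenizer(held_out_text, return_tensors="pt")

with torch.no_grad():
    loss = model(**inputs, labels=inputs["input_ids"]).loss

# Perplexity is the exponential of the average next-token loss.
print(f"Perplexity: {math.exp(loss.item()):.2f}")
```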
After taking this course, you’ll be equipped with the skills to pretrain a model—from data preparation and model configuration to performance evaluation.
Syllabus
- Pretraining LLMs
Taught by
Lucy Park and Sung Kim
Related Courses
- Hugging Face on Azure - Partnership and Solutions Announcement (Microsoft via YouTube)
- Question Answering in Azure AI - Custom and Prebuilt Solutions - Episode 49 (Microsoft via YouTube)
- Open Source Platforms for MLOps (Duke University via Coursera)
- Masked Language Modelling - Retraining BERT with Hugging Face Trainer - Coding Tutorial (rupert ai via YouTube)
- Masked Language Modelling with Hugging Face - Microsoft Sentence Completion - Coding Tutorial (rupert ai via YouTube)