YoVDO

Open Pretrained Transformer - ML Coding Series

Offered By: Aleksa Gordić - The AI Epiphany via YouTube

Tags

Machine Learning Courses C++ Courses CUDA Courses

Course Description

Overview

Dive deep into the metaseq codebase behind Meta's large language model OPT-175B in this comprehensive video tutorial. Learn how to set up the code on your machine and explore key concepts of mixed precision training, including loss scaling and unscaling. Follow along as the instructor walks through the training script, constructs dummy tasks and datasets, builds the transformer model, and examines CUDA kernels in C++ code. Gain insights into the training loop, forward pass through a transformer, and crucial aspects of loss handling, scaling, and mixed precision. Perfect for those looking to understand the intricacies of large language model implementation and training.
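The loss scaling and unscaling idea covered in the video can be sketched in a few lines. This is an illustrative simplification, not metaseq's actual scaler: in fp16, small gradients underflow to zero, so the loss is multiplied by a large scale before backward, gradients are divided by the same scale before the optimizer step, and a step with inf/nan gradients is skipped while the scale backs off. All names here (`DynamicLossScaler`, `growth_interval`) are made up for the example.

```python
import math

class DynamicLossScaler:
    """Minimal sketch of dynamic loss scaling for mixed precision training.

    Hypothetical simplification: multiply the loss by `scale` before
    backward so fp16 gradients don't underflow, divide gradients by
    `scale` before the optimizer step, and adapt `scale` based on
    whether the step overflowed.
    """

    def __init__(self, init_scale=2.0 ** 15, growth_interval=2000):
        self.scale = init_scale
        self.growth_interval = growth_interval
        self._good_steps = 0

    def scale_loss(self, loss):
        # Backward runs on this scaled loss, so every gradient is scaled too.
        return loss * self.scale

    def unscale_and_check(self, grads):
        """Divide grads by scale in place; return True on inf/nan overflow."""
        overflow = any(math.isinf(g) or math.isnan(g) for g in grads)
        if not overflow:
            for i, g in enumerate(grads):
                grads[i] = g / self.scale
        return overflow

    def update(self, overflow):
        if overflow:
            self.scale /= 2.0          # back off after a skipped step
            self._good_steps = 0
        else:
            self._good_steps += 1
            if self._good_steps >= self.growth_interval:
                self.scale *= 2.0      # probe a larger scale again
                self._good_steps = 0
```

In a training loop the pattern is: backward on `scale_loss(loss)`, then `unscale_and_check(grads)`; if it returns `True`, skip `optimizer.step()` and call `update(True)` so the scale shrinks.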

Syllabus

Intro - open pretrained transformer
Setup creating the conda env
Setup patch the code
Collecting train script arguments
Training script walk-through
Constructing a dummy task
Building the transformer model
CUDA kernels C++ code
Preparing a dummy dataset
Training loop
Zero grad loss scaling
Forward pass through a transformer
IMPORTANT loss, scaling, mixed precision, error handling
Outro
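The "forward pass through a transformer" chapter centers on causal self-attention, which can be sketched as follows. This is an illustrative single-head version in NumPy, not metaseq's implementation (which is multi-head, fused, and runs in fp16); the function and weight names are made up for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_self_attention(x, Wq, Wk, Wv):
    """Single-head causal self-attention over a (seq_len, d_model) input.

    Each position attends only to itself and earlier positions, which is
    what makes the decoder autoregressive.
    """
    T, d = x.shape
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = (q @ k.T) / np.sqrt(d)                   # scaled dot-product
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)  # strictly-future slots
    scores = np.where(mask, -np.inf, scores)          # block future tokens
    weights = softmax(scores, axis=-1)                # rows sum to 1
    return weights @ v
```

Because of the mask, the first token can only attend to itself, so its output equals its own value vector; later tokens mix values from all earlier positions.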


Taught by

Aleksa Gordić - The AI Epiphany

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Natural Language Processing
Columbia University via Coursera
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent