YoVDO

How to Train a Large Language Model

Offered By: Institute for Pure & Applied Mathematics (IPAM) via YouTube

Tags

Deep Learning Courses Neural Networks Courses Transformers Courses Attention Mechanisms Courses TPUs Courses

Course Description

Overview

Dive into a comprehensive conference talk on training large language models presented by Sam Smith of Google DeepMind at IPAM's Theory and Practice of Deep Learning Workshop. Explore key practical concepts behind LLM training, including a brief introduction to Transformers and the dominance of MLPs in computation. Gain insights into computational bottlenecks on TPUs and GPUs, techniques for training models too large for single-device memory, scaling laws, and hyper-parameter tuning. Delve into a detailed discussion of LLM inference and, time permitting, discover the design of recurrent models competitive with transformers, along with their advantages and drawbacks. Drawing from experiences with Griffin and RecurrentGemma, this 53-minute presentation offers valuable knowledge for those interested in the intricacies of LLM development and scaling.
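To make the "dominance of MLPs in computation" concrete, here is a minimal back-of-the-envelope sketch (not taken from the talk) counting the weight-matrix parameters in one standard Transformer layer; with the common convention d_ff = 4 * d_model, the MLP block holds about two thirds of the layer's parameters, and matmul FLOPs scale in proportion.

```python
# Rough per-layer parameter count for a standard Transformer block.
# A sketch under common assumptions (d_ff = 4 * d_model), not the
# speaker's exact accounting; biases and norms are ignored.
def layer_params(d_model, d_ff=None):
    d_ff = d_ff if d_ff is not None else 4 * d_model
    attn = 4 * d_model * d_model   # Q, K, V, and output projections
    mlp = 2 * d_model * d_ff       # up- and down-projection matrices
    return {"attention": attn, "mlp": mlp,
            "mlp_share": mlp / (attn + mlp)}

print(layer_params(4096))  # mlp_share is 2/3 when d_ff = 4 * d_model
```

Attention-score computation adds FLOPs that grow with sequence length, but for typical context lengths the dense MLP matmuls remain the dominant cost, which is why they drive TPU/GPU bottleneck analysis.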

Syllabus

Sam Smith - How to train an LLM - IPAM at UCLA


Taught by

Institute for Pure & Applied Mathematics (IPAM)

Related Courses

Deep Learning for Natural Language Processing
University of Oxford via Independent
Sequence Models
DeepLearning.AI via Coursera
Deep Learning Part 1 (IITM)
Indian Institute of Technology Madras via Swayam
Deep Learning - Part 1
Indian Institute of Technology, Ropar via Swayam
Deep Learning - IIT Ropar
Indian Institute of Technology, Ropar via Swayam