How to Train a Large Language Model
Offered By: Institute for Pure & Applied Mathematics (IPAM) via YouTube
Course Description
Overview
Dive into a comprehensive conference talk on training large language models, presented by Sam Smith of Google DeepMind at IPAM's Theory and Practice of Deep Learning Workshop. Explore key practical concepts behind LLM training, including a brief introduction to Transformers and why MLP layers dominate the computation. Gain insights into computational bottlenecks on TPUs and GPUs, techniques for training models too large to fit in a single device's memory, scaling laws, and hyperparameter tuning. The talk also covers LLM inference in detail and, time permitting, the design of recurrent models competitive with Transformers, along with their advantages and drawbacks. Drawing on experience with Griffin and RecurrentGemma, this 53-minute presentation offers valuable knowledge for anyone interested in the intricacies of LLM development and scaling.
Syllabus
Sam Smith - How to train an LLM - IPAM at UCLA
Taught by
Institute for Pure & Applied Mathematics (IPAM)