How to Train a Large Language Model
Offered By: Institute for Pure & Applied Mathematics (IPAM) via YouTube
Course Description
Overview
Dive into a comprehensive conference talk on training large language models presented by Sam Smith of Google DeepMind at IPAM's Theory and Practice of Deep Learning Workshop. Explore key practical concepts behind LLM training, including a brief introduction to Transformers and why MLP layers dominate the computation. Gain insights into computational bottlenecks on TPUs and GPUs, techniques for training models too large to fit in a single device's memory, scaling laws, and hyper-parameter tuning. Delve into a detailed discussion of LLM inference and, time permitting, discover the design of recurrent models competitive with Transformers, along with their advantages and drawbacks. Drawing on experiences with Griffin and RecurrentGemma, this 53-minute presentation offers valuable knowledge for those interested in the intricacies of LLM development and scaling.
Syllabus
Sam Smith - How to train an LLM - IPAM at UCLA
Taught by
Institute for Pure & Applied Mathematics (IPAM)
Related Courses
Neural Networks for Machine Learning - University of Toronto via Coursera
Machine Learning Techniques (機器學習技法) - National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning - University of Washington via Coursera
Applied Problems of Data Analysis (Прикладные задачи анализа данных) - Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning - Microsoft via edX