Aligning Open Language Models - Stanford CS25 Lecture
Offered By: Stanford University via YouTube
Course Description
Overview
Explore the evolution of open language models in this Stanford University lecture featuring Nathan Lambert from the Allen Institute for AI. Dive into the major developments in open chat, instruct, and aligned models since ChatGPT's emergence. Learn about key techniques, datasets, and models including Alpaca, QLoRA, DPO, and PPO. Gain insights into the future of aligning open language models and access accompanying slides and HuggingFace model collections. Benefit from Lambert's expertise as a Research Scientist focusing on RLHF and his background in machine learning and robotics. Part of the Stanford CS25 Transformers United series, this 1-hour 16-minute talk offers a comprehensive overview of the rapidly evolving field of language model alignment.
Syllabus
Stanford CS25: V4 | Aligning Open Language Models
Taught by
Stanford Online
Related Courses
Introduction to Artificial Intelligence (Stanford University via Udacity)
Natural Language Processing (Columbia University via Coursera)
Probabilistic Graphical Models 1: Representation (Stanford University via Coursera)
Computer Vision: The Fundamentals (University of California, Berkeley via Coursera)
Learning from Data (Introductory Machine Learning course) (California Institute of Technology via Independent)