Aligning Open Language Models - Stanford CS25 Lecture
Offered By: Stanford University via YouTube
Course Description
Overview
Explore the evolution of open language models in this Stanford University lecture featuring Nathan Lambert from the Allen Institute for AI. Dive into the major developments in open chat, instruct, and aligned models since the emergence of ChatGPT. Learn about key techniques, datasets, and models, including Alpaca, QLoRA, DPO, and PPO. Gain insights into the future of aligning open language models, and access the accompanying slides and Hugging Face model collections. Benefit from Lambert's expertise as a research scientist focusing on RLHF (reinforcement learning from human feedback) and his background in machine learning and robotics. Part of the Stanford CS25 Transformers United series, this 1-hour 16-minute talk offers a comprehensive overview of the rapidly evolving field of language model alignment.
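For context on one of the techniques the talk covers, the sketch below shows the Direct Preference Optimization (DPO) objective in PyTorch. It is a minimal illustration only; the function and variable names are assumptions for this example and do not come from the lecture or its slides.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Minimal sketch of the DPO loss.

    Each argument is a tensor of summed log-probabilities of the chosen or
    rejected response under the trainable policy or the frozen reference model.
    """
    # Implicit rewards: scaled log-ratios of policy vs. reference model
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Logistic loss that pushes the chosen response above the rejected one
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Illustrative call with dummy log-probabilities for a batch of two pairs
loss = dpo_loss(torch.tensor([-12.0, -9.5]), torch.tensor([-14.0, -11.0]),
                torch.tensor([-12.5, -9.8]), torch.tensor([-13.5, -10.5]))
print(loss.item())
```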
Syllabus
Stanford CS25: V4 I Aligning Open Language Models
Taught by
Stanford Online
Related Courses
ChatGPT et IA : mode d'emploi pour managers et RH
CNAM via France Université Numérique
Generating New Recipes using GPT-2
Coursera Project Network via Coursera
Deep Learning NLP: Training GPT-2 from scratch
Coursera Project Network via Coursera
Data Science A-Z: Hands-On Exercises & ChatGPT Prize [2024]
Udemy
Deep Learning A-Z 2024: Neural Networks, AI & ChatGPT Prize
Udemy