Aligning Open Language Models - Stanford CS25 Lecture

Offered By: Stanford University via YouTube

Tags

Language Models Courses
Machine Learning Courses
ChatGPT Courses
QLoRA Courses

Course Description

Overview

Explore the evolution of open language models in this Stanford University lecture featuring Nathan Lambert of the Allen Institute for AI. The talk surveys the major developments in open chat, instruct, and aligned models since ChatGPT's emergence, covering key techniques, datasets, and models including Alpaca, QLoRA, DPO, and PPO, and closes with a look at the future of aligning open language models. Accompanying slides and HuggingFace model collections are available. Lambert is a research scientist focused on RLHF, with a background in machine learning and robotics. Part of the Stanford CS25 Transformers United series, this 1-hour 16-minute talk offers a comprehensive overview of the rapidly evolving field of language model alignment.

Syllabus

Stanford CS25: V4 I Aligning Open Language Models


Taught by

Stanford Online

Related Courses

Fine-Tuning LLM with QLoRA on Single GPU - Training Falcon-7b on ChatBot Support FAQ Dataset
Venelin Valkov via YouTube
Deploy LLM to Production on Single GPU - REST API for Falcon 7B with QLoRA on Inference Endpoints
Venelin Valkov via YouTube
Building an LLM Fine-Tuning Dataset - From Reddit Comments to QLoRA Training
sentdex via YouTube
Generative AI: Fine-Tuning LLM Models Crash Course
Krish Naik via YouTube
Fine-Tuning LLM Models - Generative AI Course
freeCodeCamp