
LLM Safety, Alignment, and Generalization

Offered By: Simons Institute via YouTube

Tags

AI Ethics Courses, Machine Learning Courses, Autonomous Systems Courses

Course Description

Overview

Explore a comprehensive lecture on critical aspects of Large Language Model (LLM) safety, alignment, and generalization. Delve into the challenge of ruling out catastrophic harms as LLM capabilities rapidly improve across domains. Understand the importance of making affirmative safety cases for LLMs and of comprehending their motivational structures, especially as they become capable of carrying out complex autonomous plans. Examine the need to develop a science of LLM generalization that explains how training data shapes a model's beliefs and motivations. Learn from Roger Grosse of the University of Toronto in this talk from the Simons Institute's Special Year on Large Language Models and Transformers: Part 1 Boot Camp.

Syllabus

LLM Safety, Alignment, and Generalization


Taught by

Simons Institute

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Natural Language Processing
Columbia University via Coursera
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent