LLM Safety, Alignment, and Generalization
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore a comprehensive lecture on the critical aspects of Large Language Model (LLM) safety, alignment, and generalization. Delve into the challenge of ruling out catastrophic harms as LLM capabilities rapidly improve across domains. Understand the importance of making affirmative safety cases for LLMs and of comprehending their motivational structures, especially as they become capable of executing complex autonomous plans. Examine the need to develop a science of LLM generalization that explains how training data shapes a model's beliefs and motivations. Learn from Roger Grosse of the University of Toronto as part of the Simons Institute's Special Year on Large Language Models and Transformers: Part 1 Boot Camp.
Syllabus
LLM Safety, Alignment, and Generalization
Taught by
Simons Institute
Related Courses
Knowledge-Based AI: Cognitive Systems — Georgia Institute of Technology via Udacity
AI for Everyone: Master the Basics — IBM via edX
Introducción a La Inteligencia Artificial (IA) — IBM via Coursera
AI for Legal Professionals (I): Law and Policy — National Chiao Tung University via FutureLearn
Artificial Intelligence Ethics in Action — LearnQuest via Coursera