Leveraging Human Input to Enable Robust AI Systems
Offered By: Stanford University via YouTube
Course Description
Overview
Explore a Stanford seminar on leveraging human input for robust AI systems. Delve into Daniel S. Brown's research on incorporating human feedback to enhance AI safety and performance. Learn about maintaining uncertainty over human intent, generating risk-averse behaviors, and efficiently querying for additional human input. Discover approaches for developing AI systems that can accurately interpret and respond to human guidance. Gain insights into the long-term vision for safe and robust AI, including learning from multi-modal human input, interpretable robustness, and human-in-the-loop machine learning techniques that extend beyond reward function uncertainty.
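To make the three core ideas concrete, here is a minimal illustrative sketch, not taken from the seminar itself: keep a posterior over the human's reward function, rank candidate behaviors by a risk-averse criterion (conditional value at risk), and query the human when the posterior disagrees about which behavior is better. All names, features, and numbers below are hypothetical assumptions for illustration only.

```python
# Illustrative sketch only: maintain uncertainty over human intent,
# act risk-aversely under that uncertainty, and decide when to query
# the human for more input. Hypothetical features and parameters.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior over the human's reward function: each sample is a
# weight vector over two behavior features (e.g., speed, safety margin).
reward_samples = rng.normal(loc=[1.0, 0.5], scale=0.4, size=(200, 2))

# Candidate behaviors described by the same two features.
behaviors = {
    "aggressive": np.array([0.9, 0.1]),
    "cautious":   np.array([0.4, 0.8]),
}

def cvar(returns, alpha=0.1):
    """Mean of the worst alpha-fraction of returns (lower tail)."""
    cutoff = np.quantile(returns, alpha)
    return returns[returns <= cutoff].mean()

# Risk-averse selection: prefer the behavior whose worst-case performance
# under the reward posterior is best.
scores = {name: cvar(reward_samples @ phi) for name, phi in behaviors.items()}
best = max(scores, key=scores.get)

# Query heuristic: if posterior samples disagree about which behavior is
# better, it may be worth asking the human for another demonstration or
# preference label.
returns = {name: reward_samples @ phi for name, phi in behaviors.items()}
disagreement = np.mean(returns["aggressive"] > returns["cautious"])
should_query = 0.2 < disagreement < 0.8

print(f"risk-averse choice: {best}, scores: {scores}")
print(f"posterior disagreement: {disagreement:.2f}, query human: {should_query}")
```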
Syllabus
Stanford Seminar - Leveraging Human Input to Enable Robust AI Systems, Daniel S. Brown
Taught by
Stanford Online
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Natural Language Processing - Columbia University via Coursera
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent