How Could We Design Aligned and Provably Safe AI?
Offered By: Inside Livermore Lab via YouTube
Course Description
Overview
Explore a thought-provoking seminar on designing aligned and provably safe AI systems, presented by Dr. Yoshua Bengio, a Turing Award winner and world-renowned AI expert. Delve into the challenges of evaluating risks in learned AI systems and discover a potential solution through run-time risk assessment. Examine the concept of bounding the probability of harm using Bayesian approaches and neural networks, while considering the importance of capturing epistemic uncertainty. Learn about the research program based on these ideas and the potential application of amortized inference with large neural networks for estimating required quantities. Gain valuable insights into the future of AI safety and alignment from one of the pioneers in deep learning.
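The run-time risk assessment idea mentioned above can be illustrated with a minimal sketch. This is not Dr. Bengio's actual method, only a toy illustration under assumed names: the Bayesian posterior over models is approximated by a small ensemble, and the probability of harm for a proposed action is bounded conservatively by the worst case over ensemble members, so that epistemic uncertainty (disagreement between models) raises the bound.

```python
# Toy sketch (hypothetical, not the seminar's actual algorithm):
# run-time risk assessment via a conservative bound on harm probability.
import random

random.seed(0)

def make_ensemble(n_models=5):
    """Each 'model' maps an action to an estimated probability of harm.
    The randomly perturbed weights stand in for posterior samples over
    model parameters, i.e. epistemic uncertainty."""
    return [lambda a, w=random.uniform(0.5, 1.5): min(1.0, w * a)
            for _ in range(n_models)]

def harm_bound(action, ensemble):
    """Conservative bound: the maximum estimated harm probability across
    posterior samples. More disagreement means a larger bound."""
    return max(m(action) for m in ensemble)

def safe_to_execute(action, ensemble, threshold=0.1):
    """Run-time guardrail: permit only actions whose bounded harm
    probability stays below the risk threshold."""
    return harm_bound(action, ensemble) < threshold

ensemble = make_ensemble()
print(safe_to_execute(0.01, ensemble))  # low-risk action is allowed
print(safe_to_execute(0.5, ensemble))   # high-risk action is rejected
```

In a realistic system the ensemble would be replaced by amortized inference with large neural networks, as the seminar discusses; the guardrail logic of rejecting actions whose risk bound exceeds a threshold is the core idea.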
Syllabus
DSI Seminar Series | How Could We Design Aligned and Provably Safe AI?
Taught by
Inside Livermore Lab
Related Courses
Designing and Executing Information Security Strategies
University of Washington via Coursera
Caries Management by Risk Assessment (CAMBRA)
University of California, San Francisco via Coursera
Diagnosing the Financial Health of a Business
Macquarie Graduate School of Management via Open2Study
Enfermedades transfronterizas de los animales (Transboundary Animal Diseases)
Miríadax
Unethical Decision Making in Organizations
University of Lausanne via Coursera