
How Could We Design Aligned and Provably Safe AI?

Offered By: Inside Livermore Lab via YouTube

Tags

Artificial Intelligence Courses Deep Learning Courses Neural Networks Courses Risk Assessment Courses Bayesian Inference Courses Uncertainty Quantification Courses

Course Description

Overview

Explore a thought-provoking seminar on designing aligned and provably safe AI systems, presented by Dr. Yoshua Bengio, Turing Award winner and world-renowned AI expert. Delve into the challenge of evaluating risks in learned AI systems and a potential solution based on run-time risk assessment. Examine how the probability of harm can be bounded using Bayesian approaches and neural networks, and why capturing epistemic uncertainty is essential to such bounds. Learn about the research program built on these ideas, including the potential use of amortized inference with large neural networks to estimate the required quantities. Gain valuable insights into the future of AI safety and alignment from one of the pioneers of deep learning.
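To make the run-time risk-assessment idea above concrete, here is a minimal Python sketch. It is an illustration under stated assumptions, not Bengio's actual method or code: a toy ensemble stands in for Bayesian posterior samples over world models, and the probability of harm for a candidate action is bounded by the most pessimistic ensemble member, so disagreement among hypotheses (epistemic uncertainty) directly widens the bound. The names `harm_probability`, `risk_bound`, and `RISK_THRESHOLD` are hypothetical choices for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "posterior samples": each row is one hypothesis (a linear harm model).
# In a real system these would come from Bayesian inference, e.g. amortized
# with a large neural network, rather than from random draws.
N_HYPOTHESES, N_FEATURES = 32, 4
posterior_weights = rng.normal(size=(N_HYPOTHESES, N_FEATURES))

def harm_probability(action_features: np.ndarray) -> np.ndarray:
    """Predicted P(harm | action) under every sampled hypothesis."""
    logits = posterior_weights @ action_features
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid, one value per hypothesis

def risk_bound(action_features: np.ndarray) -> float:
    """Pessimistic run-time bound: worst case over plausible hypotheses.

    Epistemic uncertainty appears as disagreement among hypotheses; taking
    the max keeps the bound valid for any hypothesis the posterior supports.
    """
    return float(harm_probability(action_features).max())

RISK_THRESHOLD = 0.05  # hypothetical maximum acceptable probability of harm

action = rng.normal(size=N_FEATURES)
bound = risk_bound(action)
print(f"harm bound = {bound:.3f} -> "
      f"{'reject action' if bound > RISK_THRESHOLD else 'allow action'}")
```

Taking the maximum over posterior samples is the simplest conservative choice; the seminar's framing suggests estimating such quantities with amortized inference in large neural networks rather than a small toy ensemble.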

Syllabus

DSI Seminar Series | How Could We Design Aligned and Provably Safe AI?


Taught by

Inside Livermore Lab

Related Courses

Neural Networks for Machine Learning
University of Toronto via Coursera
Machine Learning Techniques (機器學習技法)
National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning
University of Washington via Coursera
Applied Problems of Data Analysis (Прикладные задачи анализа данных)
Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning
Microsoft via edX