Provable Robustness Beyond Bound Propagation
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore the frontiers of deep learning in this 48-minute lecture by Zico Kolter from Carnegie Mellon University. Delve into the critical topic of provable robustness in deep learning systems, moving beyond traditional bound propagation techniques. Gain insights into adversarial attacks, their significance, and the concept of adversarial robustness. Examine the causes of adversarial examples and evaluate randomization as a potential defense mechanism. Discover the visual intuition behind randomized smoothing and understand its guarantees. Follow the proof of certified robustness while considering its important caveats. Compare the presented approach with previous state-of-the-art methods on CIFAR10 and assess its performance on ImageNet. Enhance your understanding of advanced deep learning concepts and their practical implications in this comprehensive talk from the Simons Institute's "Frontiers of Deep Learning" series.
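For orientation, the randomized smoothing guarantee covered in the talk can be stated in its standard form (a sketch following Cohen, Rosen, and Kolter, 2019, on which this talk is based; the notation on the lecture slides may differ). Given a base classifier f and noise level \sigma, the smoothed classifier is

g(x) = \arg\max_c \; \mathbb{P}_{\varepsilon \sim \mathcal{N}(0, \sigma^2 I)}\left[ f(x + \varepsilon) = c \right],

and if the top class c_A is predicted under noise with probability at least \underline{p_A}, while every other class has probability at most \overline{p_B}, then g(x + \delta) = c_A for every perturbation satisfying

\|\delta\|_2 < R = \frac{\sigma}{2}\left( \Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B}) \right),

where \Phi^{-1} is the inverse standard Gaussian CDF. In practice the class probabilities must be estimated by Monte Carlo sampling, which is among the caveats the talk discusses.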
Syllabus
Intro
Adversarial attacks on deep learning
Why should we care?
Adversarial robustness
How do we strictly upper bound the maximization?
This talk
What causes adversarial examples?
Randomization as a defense?
Visual intuition of randomized smoothing
The randomized smoothing guarantee
Proof of certified robustness (continued)
Caveats (a.k.a. the fine print)
Comparison to previous SOTA on CIFAR10
Performance on ImageNet
Taught by
Simons Institute
Related Courses
AI for Cybersecurity (Johns Hopkins University via Coursera)
Securing AI and Advanced Topics (Johns Hopkins University via Coursera)
Machine Learning and Artificial Intelligence Security Risk: Categorizing Attacks and Failure Modes (LinkedIn Learning)
Responsible & Safe AI Systems (NPTEL via Swayam)
Intro to Testing Machine Learning Models (Test Automation University)