
Lessons Learned from Evaluating the Robustness of Defenses to Adversarial Examples

Offered By: Simons Institute via YouTube

Tags

Adversarial Attacks, Cybersecurity, Deep Learning, Threat Models

Course Description

Overview

Explore the challenges of evaluating defenses against adversarial examples in deep learning systems in this 46-minute talk by Nicholas Carlini of Google Brain. Delve into threat models and non-certified defenses, with a case study of defenses published at ICLR 2018. Learn how to distinguish true robustness from apparent robustness, and take away practical lessons for conducting better evaluations. Understand the iterative process of attacking and defending that drives progress in the study of adversarial examples.

Syllabus

Intro
How do we generate adversarial examples? (a sketch follows this syllabus)
Threat Models
A threat model is a formal statement defining when a system is intended to be secure.
This talk: non-certified defenses
For example: adversarial training (also sketched after this syllabus)
How complete are evaluations?
Case Study: ICLR 2018
Broken Defenses vs. Correct Defenses
Lessons Learned from Evaluating the Robustness of Defenses to Adversarial Examples
Lessons (1 of 2): Disentangling true robustness from apparent robustness is nontrivial
Lessons (2 of 2): Performing better evaluations
To understand adversarial examples, repeatedly attack and defend, optimizing for lessons learned.
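The syllabus item on generating adversarial examples is commonly answered with gradient-based attacks under an L-infinity threat model, where the adversary may apply any perturbation delta with max-norm at most epsilon. Below is a minimal sketch of the one-step fast gradient sign method (FGSM); it is not code from the talk, and the model, data, and epsilon are placeholders:

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon):
    """One-step FGSM: move x by epsilon in the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # The resulting perturbation satisfies the L-infinity bound |delta| <= epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid [0, 1] range

# Toy usage: the linear model and random "images" are placeholders.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(8, 1, 28, 28)           # batch of fake 28x28 grayscale images
y = torch.randint(0, 10, (8,))         # fake labels
x_adv = fgsm_attack(model, x, y, epsilon=0.1)
print((x_adv - x).abs().max().item())  # bounded by epsilon (up to clamping)
```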
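The defense named in the syllabus, adversarial training, folds the same attack into the training loop: each batch is replaced by adversarial examples crafted against the current model before the gradient step. A minimal self-contained sketch, again with a placeholder model and random data rather than anything from the talk:

```python
import torch
import torch.nn as nn

def adversarial_training_step(model, optimizer, x, y, epsilon):
    """One adversarial training step: attack the current model, then train on the adversarial batch."""
    # Inner maximization: one-step (FGSM) adversarial examples against the current model.
    x_pert = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_pert), y)
    loss.backward()
    x_adv = (x_pert + epsilon * x_pert.grad.sign()).clamp(0.0, 1.0).detach()

    # Outer minimization: standard gradient step on the adversarial batch.
    optimizer.zero_grad()  # discard parameter gradients accumulated during the attack
    adv_loss = nn.functional.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()

# Toy usage with a placeholder model and random data.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.rand(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
print(adversarial_training_step(model, optimizer, x, y, epsilon=0.1))
```

In the spirit of the talk's lessons on evaluation, a serious robustness evaluation would use a stronger iterative attack such as PGD rather than a single FGSM step; the one-step version is shown only for brevity.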


Taught by

Simons Institute

Related Courses

Machine Learning and Artificial Intelligence Security Risk: Categorizing Attacks and Failure Modes
LinkedIn Learning
How Apple Scans Your Phone and How to Evade It - NeuralHash CSAM Detection Algorithm Explained
Yannic Kilcher via YouTube
Deep Learning New Frontiers
Alexander Amini via YouTube
MIT 6.S191 - Deep Learning Limitations and New Frontiers
Alexander Amini via YouTube