Lessons Learned from Evaluating the Robustness of Defenses to Adversarial Examples

Offered By: USENIX via YouTube

Tags

USENIX Security Courses
Cryptography Courses
Machine Learning Security Courses

Course Description

Overview

Explore the challenges and lessons learned in evaluating defenses against adversarial examples in machine learning classifiers during this 48-minute USENIX Security '19 conference talk. Delve into common evaluation pitfalls, recommendations for thorough defense assessments, and comparisons between this emerging research field and established security evaluation practices. Gain insights from Research Scientist Nicholas Carlini of Google Research as he surveys the ways defenses have been broken and discusses the implications for future research. Learn about adversarial training, input transformations, and the importance of robust evaluation techniques in developing resilient machine learning models.
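One of the baseline attacks the talk covers is FGSM (the Fast Gradient Sign Method), which the syllabus below lists by name. As an illustrative aside — this is a minimal sketch on a toy logistic-regression model, not code from the talk — FGSM perturbs an input by a small step in the sign of the loss gradient:

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """One-step FGSM for binary logistic regression (toy illustration).

    Moves x by eps in the direction that increases the cross-entropy
    loss: x_adv = x + eps * sign(dL/dx).
    """
    z = w @ x + b                     # logit of the linear model
    p = 1.0 / (1.0 + np.exp(-z))      # predicted probability of class 1
    grad_x = (p - y) * w              # gradient of the loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy example: a point the model classifies as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.6)

print(w @ x + b)       # positive logit: clean input classified as 1
print(w @ x_adv + b)   # logit pushed negative: adversarial input misclassified
```

A key point of the talk is that FGSM alone is a weak evaluation: a defense that only resists this one-step attack may still fall to iterative gradient descent.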

Syllabus

Introduction
Adversarial Examples
Why Care
What are Defenses
Adversarial Training
Thermometer Encoding
Input Transformation
Evaluating the robustness
Why are defenses easily broken
Lessons Learned
Adversarial Training
Empty Set
Evaluating Adversarially
Actionable Advice
Evaluation
Holding Out Data
FGSM
Gradient Descent
No Bounds
Random Classification
Negative Things
Evaluate Against the Worst Attack
Accuracy vs Distortion
Verification
Gradient Free
Random Noise
Conclusion
AES 1997
Attack success rates in insecurity
Why are we not yet like crypto
How much we can prove
Still a lot of work to do
L2 Distortion
We don't know what we want
We don't have that today
Summary
Questions
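
Several syllabus items (Accuracy vs Distortion, Evaluate Against the Worst Attack) concern how defense robustness should be reported: as a curve of accuracy over the full range of perturbation budgets, not a single number at one fixed budget. A hedged, self-contained sketch of such a sweep — the linear model, data, and one-step attack here are toy stand-ins, not material from the talk:

```python
import numpy as np

# Sketch of an "accuracy vs. distortion" evaluation: measure classifier
# accuracy under attack at each perturbation budget eps. Toy setup:
# a fixed linear classifier and labels it already gets right at eps=0.
rng = np.random.default_rng(0)
w, b = np.array([2.0, -1.0]), 0.0            # toy linear classifier
X = rng.normal(size=(200, 2))                # toy 2-D inputs
y = (X @ w + b > 0).astype(float)            # labels agree with the model

def attack(X, y, eps):
    """One-step L-infinity (FGSM-style) perturbation of every input."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    grad = (p - y)[:, None] * w              # loss gradient per example
    return X + eps * np.sign(grad)

for eps in [0.0, 0.1, 0.25, 0.5, 1.0]:
    X_adv = attack(X, y, eps)
    acc = np.mean((X_adv @ w + b > 0).astype(float) == y)
    print(f"eps={eps:.2f}  adversarial accuracy={acc:.2f}")
```

Reporting the whole curve makes it harder to hide a brittle defense behind a single favorable operating point; the talk's stronger recommendation is to plot this curve against the best attack available, not just one fixed method.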


Taught by

USENIX

Related Courses

Applied Cryptography
University of Virginia via Udacity
Cryptography II
Stanford University via Coursera
Coding the Matrix: Linear Algebra through Computer Science Applications
Brown University via Coursera
Cryptography I
Stanford University via Coursera
Unpredictable? Randomness, Chance and Free Will
National University of Singapore via Coursera