
Lessons Learned from Evaluating the Robustness of Defenses to Adversarial Examples

Offered By: USENIX via YouTube

Tags

USENIX Security Courses
Cryptography Courses
Machine Learning Security Courses

Course Description

Overview

Explore the challenges and lessons learned in evaluating defenses against adversarial examples in machine learning classifiers during this 48-minute USENIX Security '19 conference talk. Delve into common evaluation pitfalls, recommendations for thorough defense assessments, and comparisons between this emerging research field and established security evaluation practices. Gain insights from Research Scientist Nicholas Carlini of Google Research as he surveys the ways defenses have been broken and discusses the implications for future research. Learn about adversarial training, input transformations, and the importance of robust evaluation techniques in developing resilient machine learning models.
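For readers unfamiliar with the attacks the talk covers, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the syllabus topics, written in PyTorch. The toy model, placeholder input, and the 8/255 perturbation budget are illustrative assumptions, not details from the talk.

    import torch
    import torch.nn as nn

    # Toy stand-in classifier; FGSM works the same way for any differentiable model.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    loss_fn = nn.CrossEntropyLoss()

    def fgsm(model, x, y, eps):
        # One-step FGSM: move each pixel by eps in the direction that increases the loss.
        x = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x), y)
        loss.backward()
        x_adv = x + eps * x.grad.sign()        # step within an L-infinity ball of radius eps
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid [0, 1] range

    x = torch.rand(1, 1, 28, 28)            # placeholder image batch
    y = torch.tensor([3])                   # placeholder label
    x_adv = fgsm(model, x, y, eps=8 / 255)  # a common L-infinity budget for 8-bit images

A recurring lesson of the talk, reflected in the syllabus item "Evaluate Against the Worst Attack", is that robustness to this single-step attack alone is weak evidence that a defense works.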

Syllabus

Introduction
Adversarial Examples
Why Care
What Are Defenses
Adversarial Training
Thermometer Encoding
Input Transformation
Evaluating the Robustness
Why Are Defenses Easily Broken
Lessons Learned
Adversarial Training
Empty Set
Evaluating Adversarially
Actionable Advice
Evaluation
Holding Out Data
FGSM
Gradient Descent
No Bounds
Random Classification
Negative Things
Evaluate Against the Worst Attack
Accuracy vs Distortion
Verification
Gradient Free
Random Noise
Conclusion
AES 1997
Attack Success Rates in Insecurity
Why Are We Not Yet Crypto
How Much We Can Prove
Still a Lot of Work to Do
L2 Distortion
We Don't Know What We Want
We Don't Have That Today
Summary
Questions


Taught by

USENIX

Related Courses

Certified Ethical Hacker (CEH) - Linux Academy's Prep Course
A Cloud Guru
Certified Information Systems Security Professional (CISSP)
A Cloud Guru
CompTIA Security+ Certification Prep
A Cloud Guru
Encryption Fundamentals
A Cloud Guru
LPIC-3 Exam 303: Security
A Cloud Guru