Adversarial Examples in Machine Learning - Crafting and Defending Against Attacks

Offered By: USENIX Enigma Conference via YouTube

Tags

Machine Learning Security, Image Classification, Text Classification, Malware Detection, MNIST Dataset

Course Description

Overview

Explore the vulnerabilities of machine learning models to adversarial examples in this 20-minute conference talk from USENIX Enigma 2017. Delve into the world of subtly modified malicious inputs that can compromise the integrity of model outputs, potentially affecting various systems from vehicle control to spam detection. Learn about misclassification attacks on image, text, and malware classifiers, and discover how adversarial examples can transfer between different models. Gain practical knowledge through a hands-on tutorial on adversarial example crafting, covering algorithms, threat models, and proposed defenses. Join Nicolas Papernot, Google PhD Fellow at The Pennsylvania State University, as he guides you through the intricacies of this critical aspect of machine learning security.
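
The fast gradient sign method covered in the syllabus below can be stated in one line: x_adv = x + ε · sign(∇_x J(θ, x, y)), i.e., nudge every input feature slightly in the direction that increases the training loss. As a minimal sketch of that idea, here is an illustrative PyTorch implementation; the model, loss choice, and ε value are assumptions sized for MNIST-scale images, not the speaker's exact tutorial code.

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, epsilon=0.25):
        """Fast gradient sign method (illustrative sketch).

        Computes x_adv = x + epsilon * sign(grad_x J(theta, x, y)) and
        clips the result back to the valid pixel range [0, 1].
        """
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)    # J(theta, x, y)
        loss.backward()                        # populates x.grad
        x_adv = x + epsilon * x.grad.sign()    # one signed gradient step
        return x_adv.clamp(0.0, 1.0).detach()

A single step of this kind is often enough to flip a classifier's prediction while leaving the image visually unchanged to a human, which is the failure mode the talk demonstrates.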

Syllabus

Intro
Successes of machine learning
Failures of machine learning: Dave's talk
Crafting adversarial examples: fast gradient sign method
Threat model of a black-box attack
Our approach to black-box attacks (sketched in code after this syllabus)
Adversarial example transferability
Intra-technique transferability: cross training data
Cross-technique transferability
Attacking remotely hosted black-box models
Results on real-world remote systems
Hands-on tutorial with the MNIST dataset
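
The black-box items above follow a substitute-model strategy: query the remote model only as a labeling oracle, train a local substitute on those labels, then craft adversarial examples against the substitute and count on transferability to fool the original. The sketch below, which reuses the fgsm helper from earlier, is a compressed illustration of that loop; query_oracle, substitute, and seeds are hypothetical names standing in for the remote prediction API, a local model, and a small seed dataset.

    import torch
    import torch.nn.functional as F

    def black_box_attack(query_oracle, substitute, seeds, rounds=5, lam=0.1):
        """Substitute-model black-box attack (illustrative sketch).

        Alternates between fitting a local substitute to oracle labels and
        growing the training set along the substitute's own gradients
        (Jacobian-based dataset augmentation), then crafts adversarial
        examples on the substitute with FGSM.
        """
        opt = torch.optim.Adam(substitute.parameters(), lr=1e-3)
        data = seeds.clone().detach()
        for _ in range(rounds):
            labels = query_oracle(data)              # remote model as oracle
            for _ in range(10):                      # fit substitute locally
                opt.zero_grad()
                loss = F.cross_entropy(substitute(data), labels)
                loss.backward()
                opt.step()
            # Augment: nudge each point along the gradient of its oracle label.
            data = data.clone().detach().requires_grad_(True)
            substitute(data).gather(1, labels.unsqueeze(1)).sum().backward()
            new_points = (data + lam * data.grad.sign()).detach()
            data = torch.cat([data.detach(), new_points])
        # Examples crafted on the substitute tend to transfer to the oracle.
        return fgsm(substitute, data, query_oracle(data))

Because no gradients are ever requested from the remote system, this matches the "remotely hosted black-box models" setting in the syllabus: the attacker needs only prediction access.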


Taught by

USENIX Enigma Conference

Related Courses

Introduction to Windows Malware Analysis (Introducción al Análisis del Malware en Windows)
National Technological University – Buenos Aires Regional Faculty via Miríadax
The Complete Cyber Security Course: End Point Protection!
Udemy
Master's in Computer Security: A Complete Hacking Course (Máster en Seguridad Informática. Curso completo de Hacking.)
Udemy
Network Analysis with Arkime
Pluralsight
Configuring Firepower Threat Defense (FTD) Integrations
Pluralsight