
Classifiers Under Attack: Evasion Techniques and Defensive Strategies

Offered By: USENIX Enigma Conference via YouTube

Tags

Machine Learning Security Courses
Cybersecurity Courses
Random Forests Courses
Malware Detection Courses
Adversarial Attacks Courses

Course Description

Overview

Explore the vulnerabilities of machine learning classifiers in security applications through this 20-minute conference talk from USENIX Enigma 2017. Delve into the reasons why classifiers, despite performing well in testing, can be easily thwarted by motivated adversaries in real-world scenarios. Examine how attackers construct evasive variants that are misclassified as benign, and understand the inherent fragility of many machine learning techniques, including deep neural networks. Learn about successful evasion techniques, including automated methods, and discover potential strategies to enhance classifier robustness against adversarial attacks. Gain insights into evaluating the resilience of deployed classifiers in adversarial environments, and understand the implications for the future of machine learning in security applications.
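To ground the setup, here is a minimal sketch, with invented data, of the kind of classifier the talk examines: a random forest trained on toy "PDF structural features" that scores well on an i.i.d. held-out test set, the evaluation the talk argues can be misleading. The features, numbers, and names are hypothetical, for illustration only.

```python
# A minimal sketch (not from the talk): a random forest malware classifier
# trained on invented "PDF structural features".
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features: [object count, JavaScript blocks, embedded files]
benign = rng.normal(loc=[40, 0.1, 0.2], scale=[10, 0.3, 0.4], size=(500, 3))
malicious = rng.normal(loc=[15, 3.0, 1.5], scale=[5, 1.0, 0.8], size=(500, 3))

X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Strong accuracy on i.i.d. test data -- the setting the talk argues says
# little about robustness against an adaptive adversary.
print("held-out accuracy:", clf.score(X_test, y_test))
```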

Syllabus

Intro
Adversaries Don't Cooperate
Focus: Evasion Attacks
PDF Malware Classifiers
Random Forest
Automated Classifier Evasion Using Genetic Programming (sketched in code after this syllabus)
Goal: Find Evasive Variant
Start with Malicious Seed
Generating Variants
Selecting Promising Variants
Oracle
Fitness Function
Classifier Performance
Execution Cost
Retraining Classifier
Hide Classifier "Security Through Obscurity"
Cross-Evasion Effects
Evading Gmail's Classifier
Conclusion
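
The syllabus entries from "Start with Malicious Seed" through "Fitness Function" outline a genetic search for evasive variants. Below is a minimal sketch of that loop under stated assumptions, not the talk's actual implementation: it continues from the previous snippet (reusing clf, X_train, y_train, and malicious), and mutate and is_still_malicious are hypothetical stand-ins for the talk's PDF mutation operators and behavioral oracle.

```python
def mutate(x, rng, scale=0.5):
    """Hypothetical mutation operator: perturb one randomly chosen feature."""
    x = x.copy()
    i = rng.integers(len(x))
    x[i] += rng.normal(0.0, scale)
    return x

def is_still_malicious(x):
    """Stand-in oracle; a real one would run the variant in a sandbox to
    confirm it still exhibits the malicious behavior."""
    return True  # placeholder assumption

def evade(clf, seed, rng, pop_size=40, generations=200):
    """Search for a variant the classifier labels benign (class 0)."""
    population = [seed.copy() for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness function: how benign the classifier believes each variant is.
        fitness = clf.predict_proba(np.array(population))[:, 0]
        best = population[int(np.argmax(fitness))]
        if fitness.max() > 0.5 and is_still_malicious(best):
            return best  # evasive variant: still malicious, classified benign
        # Select the most promising variants, then mutate them to refill the pool.
        order = np.argsort(fitness)[::-1][: pop_size // 2]
        parents = [population[i] for i in order]
        population = [mutate(parents[i % len(parents)], rng) for i in range(pop_size)]
    return None

variant = evade(clf, seed=malicious[0], rng=np.random.default_rng(1))
print("evasive variant:", variant)
```

The "Retraining Classifier" item points to one candidate defense: fold discovered evasive variants back into the training set and refit. A one-step sketch of that idea:

```python
if variant is not None:
    # Append the evasive variant with its true (malicious) label and refit.
    X_train = np.vstack([X_train, variant])
    y_train = np.append(y_train, 1)
    clf = clf.fit(X_train, y_train)
```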


Taught by

USENIX Enigma Conference

Related Courses

Practical Machine Learning
Johns Hopkins University via Coursera
Detección de objetos (Object Detection)
Universitat Autònoma de Barcelona (Autonomous University of Barcelona) via Coursera
Practical Machine Learning on H2O
H2O.ai via Coursera
Modélisez vos données avec les méthodes ensemblistes (Model Your Data with Ensemble Methods)
CentraleSupélec via OpenClassrooms
Introduction to Machine Learning for Coders!
fast.ai via Independent