Classifiers Under Attack: Evasion Techniques and Defensive Strategies
Offered By: USENIX Enigma Conference via YouTube
Course Description
Overview
Explore the vulnerabilities of machine learning classifiers in security applications through this 20-minute conference talk from USENIX Enigma 2017. Delve into the reasons why classifiers, despite performing well in testing, can be easily thwarted by motivated adversaries in real-world scenarios. Examine how attackers construct evasive variants that are misclassified as benign, and understand the inherent fragility of many machine learning techniques, including deep neural networks. Learn about successful evasion techniques, including automated methods, and discover potential strategies to enhance classifier robustness against adversarial attacks. Gain insights into evaluating the resilience of deployed classifiers in adversarial environments, and understand the implications for the future of machine learning in security applications.
Syllabus
Intro
Adversaries Don't Cooperate
Focus: Evasion Attacks
PDF Malware Classifiers
Random Forest
Automated Classifier Evasion Using Genetic Programming
Goal: Find Evasive Variant
Start with Malicious Seed
Generating Variants
Selecting Promising Variants
Oracle
Fitness Function
Classifier Performance
Execution Cost
Retraining Classifier
Hide Classifier "Security Through Obscurity"
Cross-Evasion Effects
Evading Gmail's Classifier
Conclusion
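The syllabus steps above (start with a malicious seed, generate variants, select promising ones with a fitness function, and confirm malicious behavior with an oracle) can be sketched as a small genetic-search loop. This is an illustrative toy, not the talk's actual system: `toy_classifier`, `oracle`, and `mutate` are hypothetical stand-ins, and samples are simplified to binary feature vectors.

```python
import random

def toy_classifier(sample):
    """Hypothetical classifier: maliciousness score in [0, 1]; > 0.5 flags malicious."""
    # Toy stand-in: the score is just the fraction of suspicious features set.
    return sum(sample) / len(sample)

def oracle(sample):
    """Hypothetical oracle: confirms the variant still behaves maliciously.
    Here: at least one suspicious feature must survive the mutations."""
    return any(sample)

def mutate(sample):
    """Flip one randomly chosen feature (stand-in for inserting/deleting
    an element of the file)."""
    child = list(sample)
    i = random.randrange(len(child))
    child[i] = 1 - child[i]
    return child

def evade(seed, population_size=20, generations=50):
    """Evolve variants of a malicious seed until one keeps its malicious
    behavior (passes the oracle) yet is scored benign by the classifier."""
    population = [list(seed)]
    for _ in range(generations):
        # Generate candidate variants from the current population.
        variants = [mutate(random.choice(population)) for _ in range(population_size)]
        # Discard variants that no longer work (fail the oracle).
        working = [v for v in variants if oracle(v)]
        if not working:
            continue
        # Fitness: a lower classifier score means the variant is closer to evading.
        working.sort(key=toy_classifier)
        best = working[0]
        if toy_classifier(best) <= 0.5:
            return best  # evasive variant found
        # Keep the most promising half as the next generation's parents.
        population = working[: population_size // 2]
    return None

random.seed(0)
seed = [1] * 8          # fully "suspicious" malicious seed, score 1.0
variant = evade(seed)   # evolved variant scored benign but still functional
```

The design mirrors the attack's key insight: the attacker needs no internal knowledge of the classifier, only its output scores plus an oracle to verify the variant still works.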
Taught by
USENIX Enigma Conference
Related Courses
Computer Security - Stanford University via Coursera
Cryptography II - Stanford University via Coursera
Malicious Software and its Underground Economy: Two Sides to Every Story - University of London International Programmes via Coursera
Building an Information Risk Management Toolkit - University of Washington via Coursera
Introduction to Cybersecurity - National Cybersecurity Institute at Excelsior College via Canvas Network