A Sound Mind in a Vulnerable Body - Practical Hardware Attacks on Deep Learning
Offered By: USENIX Enigma Conference via YouTube
Course Description
Overview
Explore practical hardware attacks on deep learning systems in this USENIX Enigma Conference talk. Delve into the vulnerabilities of machine learning models running on hardware, examining fault-injection and side-channel attacks. Learn how flipping a single bit in a deep neural network's memory representation can drastically degrade prediction accuracy, and discover how cache side-channel attacks can reverse-engineer proprietary DNN architecture details. Gain insights into the under-studied topic of ML vulnerability to hardware attacks, and understand the need for additional ML-level defenses whose robustness guarantees account for hardware-level faults. Consider the implications of these findings for the security of machine learning systems and the importance of addressing both the "soundness of mind" and the "vulnerable body" in ML security research.
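To make the single-bit-flip result concrete, here is a minimal Python sketch (illustrative only, not code from the talk) of why one flipped bit can be so damaging: flipping the most significant exponent bit of an IEEE-754 float32 weight inflates its magnitude by a factor of roughly 2^128, more than enough to derail every downstream activation.

    import struct

    def flip_bit(value: float, bit: int) -> float:
        """Flip one bit (0 = least significant, 31 = sign) in the
        float32 encoding of `value` and decode the result."""
        (as_int,) = struct.unpack("<I", struct.pack("<f", value))
        (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
        return flipped

    weight = 0.5                   # a typical small DNN weight
    print(flip_bit(weight, 30))    # top exponent bit flipped: ~1.7e+38

Applied to a single weight of a model sitting in memory, a corruption like this is what allows one bit-flip to collapse prediction accuracy.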
Syllabus
Intro
Recent Work on Secure Machine Learning
Conventional View on ML Models' Robustness
We Propose A New Perspective!
Hardware Attacks Can Break Mathematically-Proven Guarantees
(Weak) Hardware Attacks Can Be Exploited in the Cloud
Prior Work's Perspective on a Model's Robustness
The Worst-Case Perturbation
Threat Model - Single-Bit Adversaries
Evaluate the Weakest Attacker with Multiple Bit-flips
Our Attack: Reconstruction of DNN Architectures from the Trace
We Can Identify the Layers Accessed While Computing
Solution: Generate All Candidate Architectures
Solution: Eliminate Incompatible Candidates
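The last two syllabus steps lend themselves to a short sketch. Below is a hedged Python illustration (the variable names and the toy cost model are hypothetical, not the speakers' code) of generating candidate architectures consistent with a layer sequence recovered from a cache trace, then eliminating candidates whose predicted per-layer timings are incompatible with the observed trace.

    from itertools import product

    # Assumed side-channel observations: the layer-type sequence and
    # per-layer timings (ms) recovered from the cache trace.
    observed_layers = ["conv", "conv", "pool", "fc"]
    observed_times = [4.1, 4.0, 0.3, 1.2]

    # Step 1: generate all candidates consistent with the layer sequence,
    # varying the unobserved hyperparameters (here, only each conv
    # layer's output-channel count).
    CHANNEL_CHOICES = [16, 32, 64]
    n_convs = observed_layers.count("conv")
    candidates = [
        {"layers": observed_layers, "conv_channels": chans}
        for chans in product(CHANNEL_CHOICES, repeat=n_convs)
    ]

    def predicted_time(layer: str, channels: int) -> float:
        """Toy cost model (assumption): conv time scales with channels."""
        return {"conv": 0.065 * channels, "pool": 0.3, "fc": 1.2}[layer]

    # Step 2: eliminate candidates whose predicted timings deviate from
    # the measured trace by more than a tolerance.
    def compatible(cand: dict, tolerance: float = 0.5) -> bool:
        chans = iter(cand["conv_channels"])
        for layer, t_obs in zip(cand["layers"], observed_times):
            c = next(chans) if layer == "conv" else 0
            if abs(predicted_time(layer, c) - t_obs) > tolerance:
                return False
        return True

    survivors = [c for c in candidates if compatible(c)]
    print(survivors)   # ideally a single architecture remains

In this toy run only the (64, 64)-channel candidate survives, mirroring how the attack narrows a large candidate set down to the victim's actual architecture.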
Taught by
USENIX Enigma Conference
Related Courses
CompTIA PenTest+ Certification - A Cloud Guru
AWS SimuLearn: Cyber Security Threats - Amazon Web Services via AWS Skill Builder
Ethical Hacking - Cabrillo College via California Community Colleges System
Network Security - City College of San Francisco via California Community Colleges System
Ethical Hacking - Chaffey College via California Community Colleges System