Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models
Offered By: USENIX via YouTube
Course Description
Overview
Explore a 14-minute conference talk from USENIX Security '22 examining novel model inversion attribute inference attacks on classification models. Delve into the potential privacy risks associated with machine learning technologies in sensitive domains. Learn about confidence score-based and label-only model inversion attacks that outperform existing methods. Understand how these attacks can infer sensitive attributes from training data using only black-box access to the target model. Examine the evaluation of these attacks on decision tree and deep neural network models trained on real datasets. Discover the concept of disparate vulnerability, where specific groups in the training dataset may be more susceptible to model inversion attacks. Gain insights into the implications for privacy and security in machine learning applications.
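The confidence score-based attack described above can be illustrated with a minimal sketch. This is not the talk's actual attack implementation; it is a toy illustration of the underlying intuition (as in earlier Fredrikson-style attacks): for a target record whose sensitive attribute is hidden, query the black-box model once per candidate value and pick the value whose prediction assigns the highest confidence to the record's known true label. All names (`infer_sensitive_attribute`, `toy_model`, the attribute and label values) are hypothetical.

```python
# Hedged sketch of a confidence score-based model inversion
# attribute inference attack. Assumes black-box access to a model
# that returns confidence scores per class label.

def infer_sensitive_attribute(predict_proba, record, sensitive_key,
                              candidates, true_label):
    """Return the candidate sensitive value that maximizes the model's
    confidence in the record's known true label.

    predict_proba(record) -> dict mapping class labels to confidences.
    """
    best_value, best_conf = None, -1.0
    for value in candidates:
        # Fill in the hidden attribute with this candidate and query.
        guess = dict(record, **{sensitive_key: value})
        conf = predict_proba(guess).get(true_label, 0.0)
        if conf > best_conf:
            best_value, best_conf = value, conf
    return best_value

# Toy black-box model whose output leaks the hidden "marital" attribute.
def toy_model(record):
    if record["marital"] == "married":
        return {">50K": 0.8, "<=50K": 0.2}
    return {">50K": 0.3, "<=50K": 0.7}

print(infer_sensitive_attribute(
    toy_model,
    {"age": 40, "marital": None},   # target record, attribute hidden
    "marital",
    ["married", "single"],
    ">50K",                          # the record's known true label
))  # prints "married"
```

A label-only variant replaces the confidence comparison with a check of whether the model's hard prediction matches the true label, which is why such attacks work even when the model exposes no scores.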
Syllabus
Intro
What is a Model Inversion Attack?
Model Inversion Attack Types
Model Inversion - Sensitive Attribute Inference
Existing Attacks and Defenses
LOMIA Intuition
LOMIA Attack Model Training
Experiment Setup
Attack Results
Disparate Vulnerability of Model Inversion
Conclusion
Taught by
USENIX
Related Courses
Never Been KIST - Tor's Congestion Management Blossoms with Kernel-Informed Socket Transport
USENIX via YouTube
Eclipse Attacks on Bitcoin's Peer-to-Peer Network
USENIX via YouTube
Control-Flow Bending - On the Effectiveness of Control-Flow Integrity
USENIX via YouTube
Protecting Privacy of BLE Device Users
USENIX via YouTube
K-Fingerprinting - A Robust Scalable Website Fingerprinting Technique
USENIX via YouTube