
Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models

Offered By: USENIX via YouTube

Tags

USENIX Security Courses
Data Science Courses
Machine Learning Courses
Privacy Courses
Vulnerability Analysis Courses

Course Description

Overview

Explore a 14-minute conference talk from USENIX Security '22 examining novel model inversion attribute inference attacks on classification models. Delve into the potential privacy risks associated with machine learning technologies in sensitive domains. Learn about confidence score-based and label-only model inversion attacks that outperform existing methods. Understand how these attacks can infer sensitive attributes from training data using only black-box access to the target model. Examine the evaluation of these attacks on decision tree and deep neural network models trained on real datasets. Discover the concept of disparate vulnerability, where specific groups in the training dataset may be more susceptible to model inversion attacks. Gain insights into the implications for privacy and security in machine learning applications.
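To make the attack idea concrete, the following is a minimal illustrative sketch of a confidence score-based model inversion attribute inference attack: an attacker who knows a training record's non-sensitive attributes and its label queries the target model with each candidate value of the sensitive attribute and keeps the value that yields the highest confidence in the known label. This is not the LOMIA attack from the talk; the toy dataset, the scikit-learn decision tree target model, and the helper infer_sensitive are assumptions made purely for illustration.

# Illustrative sketch only (not the paper's LOMIA implementation):
# a simple confidence score-based model inversion attribute inference attack.
# Assumed setting: the attacker knows every attribute of a record except one
# binary sensitive attribute, knows the true label, and has black-box access
# to the target model's predicted probabilities.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy training data: the last column is the binary sensitive attribute.
X = rng.integers(0, 2, size=(500, 4)).astype(float)
y = ((X[:, 0] + X[:, 3]) > 1).astype(int)  # label correlates with the sensitive attribute

# Victim model; the attacker only gets query access to predict_proba.
target_model = DecisionTreeClassifier(max_depth=4).fit(X, y)

def infer_sensitive(known_features, true_label, candidates=(0, 1)):
    """Try each candidate value for the sensitive attribute and keep the one
    that gives the target model the highest confidence in the known label."""
    best_value, best_conf = None, -1.0
    for value in candidates:
        query = np.append(known_features, value).reshape(1, -1)
        conf = target_model.predict_proba(query)[0][true_label]
        if conf > best_conf:
            best_value, best_conf = value, conf
    return best_value

# Attack a training record: the attacker knows X[i, :3] and y[i], but not X[i, 3].
i = 0
guess = infer_sensitive(X[i, :3], y[i])
print(f"guessed sensitive attribute: {guess}, actual: {int(X[i, 3])}")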

Syllabus

Intro
What is a Model Inversion Attack?
Model Inversion Attack Types
Model Inversion: Sensitive Attribute Inference
Existing Attacks and Defenses
LOMIA Intuition
LOMIA Attack Model Training
Experiment Setup
Attack Results
Disparate Vulnerability of Model Inversion
Conclusion


Taught by

USENIX

Related Courses

Data Analysis
Johns Hopkins University via Coursera
Computing for Data Analysis
Johns Hopkins University via Coursera
Scientific Computing
University of Washington via Coursera
Introduction to Data Science
University of Washington via Coursera
Web Intelligence and Big Data
Indian Institute of Technology Delhi via Coursera