Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models

Offered By: USENIX via YouTube

Tags

USENIX Security Courses, Data Science Courses, Machine Learning Courses, Privacy Courses, Vulnerability Analysis Courses

Course Description

Overview

Explore a 14-minute conference talk from USENIX Security '22 examining novel model inversion attribute inference attacks on classification models. Delve into the potential privacy risks associated with machine learning technologies in sensitive domains. Learn about confidence score-based and label-only model inversion attacks that outperform existing methods. Understand how these attacks can infer sensitive attributes from training data using only black-box access to the target model. Examine the evaluation of these attacks on decision tree and deep neural network models trained on real datasets. Discover the concept of disparate vulnerability, where specific groups in the training dataset may be more susceptible to model inversion attacks. Gain insights into the implications for privacy and security in machine learning applications.
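The attack strategy described above can be illustrated with a minimal sketch. This is a hypothetical toy example, not the talk's actual method: the attacker knows a record's non-sensitive attributes and its true label, has only black-box query access to the target model, and tests each candidate value of the sensitive attribute, keeping the one whose prediction best matches the known label. The function names and the toy classifier are assumptions made for illustration.

```python
# Toy sketch of a confidence score-based model inversion attribute
# inference attack. All names and the stand-in model are hypothetical.

def black_box_predict(features):
    """Stand-in for black-box access to a target classifier.
    Returns (predicted_label, confidence). The toy rule leaks a
    correlation between the sensitive attribute and the label."""
    age, sensitive = features
    score = 0.9 if (sensitive == 1 and age > 40) else 0.3
    label = 1 if score >= 0.5 else 0
    confidence = score if label == 1 else 1 - score
    return label, confidence

def invert_sensitive_attribute(known_features, true_label, candidates):
    """Query the model once per candidate sensitive value and keep
    the candidate whose prediction matches the known true label
    with the highest confidence."""
    best, best_conf = None, -1.0
    for cand in candidates:
        label, conf = black_box_predict(known_features + [cand])
        if label == true_label and conf > best_conf:
            best, best_conf = cand, conf
    return best

# Attacker knows age=50 and the record's true label (1),
# and infers the binary sensitive attribute.
guess = invert_sensitive_attribute([50], 1, candidates=[0, 1])
print(guess)
```

A label-only variant of the same idea would drop the confidence score and rely solely on whether the predicted label matches, which is why, as the talk notes, such attacks remain feasible even when the model exposes no scores.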

Syllabus

Intro
What is a Model Inversion Attack?
Model Inversion Attack Types
Model Inversion: Sensitive Attribute Inference
Existing Attacks and Defenses
LOMIA Intuition
LOMIA Attack Model Training
Experiment Setup
Attack Results
Disparate Vulnerability of Model Inversion
Conclusion


Taught by

USENIX

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Natural Language Processing
Columbia University via Coursera
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent