Membership Inference Attacks against Machine Learning Models

Offered By: IEEE via YouTube

Tags

Adversarial Machine Learning Courses Data Privacy Courses Machine Learning Models Courses

Course Description

Overview

Explore membership inference attacks against machine learning models in this IEEE Symposium on Security and Privacy conference talk. Delve into how models can leak information about the individual data records they were trained on, focusing on the basic membership inference attack: given a data record and only black-box query access to a model, determine whether that record was part of the model's training dataset. Discover how adversarial inference models are trained to recognize differences in the target model's predictions on inputs it was trained on versus inputs it was not. Examine empirical evaluations of these inference techniques on classification models built by commercial "machine learning as a service" providers. Investigate the factors that influence data leakage and evaluate mitigation strategies on realistic datasets, including a sensitive hospital discharge dataset. Gain insights into machine learning privacy, attacks on summary statistics, shadow models, and the construction of effective attack models.
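The shadow-model idea described above can be illustrated with a minimal, self-contained sketch. Everything here is a toy stand-in: the deliberately overfitted 1-nearest-neighbour "service", the two-Gaussian-blob data, and the single confidence threshold are illustrative simplifications of the talk's approach, which trains per-class attack classifiers on full prediction vectors.

```python
import math
import random

random.seed(0)

def make_data(n):
    """Two overlapping Gaussian blobs, one per class (toy stand-in for real data)."""
    pts = []
    for _ in range(n):
        c = random.randint(0, 1)
        pts.append(((random.gauss(2.0 * c, 1.0), random.gauss(2.0 * c, 1.0)), c))
    return pts

class NNModel:
    """Deliberately overfitted 1-nearest-neighbour 'service'.
    The attacker only ever calls predict_proba, i.e. black-box access."""
    def __init__(self, train):
        self.train = train

    def predict_proba(self, p):
        # Softmax over the negative distance to each class's nearest training example.
        nearest = {0: float("inf"), 1: float("inf")}
        for q, c in self.train:
            nearest[c] = min(nearest[c], math.dist(p, q))
        e0, e1 = math.exp(-nearest[0]), math.exp(-nearest[1])
        return (e0 / (e0 + e1), e1 / (e0 + e1))

def confidence(model, p):
    return max(model.predict_proba(p))

# Shadow models: the attacker trains models it controls, so it knows exactly
# which records were members ("in") and which were not ("out").
member_conf, nonmember_conf = [], []
for _ in range(5):
    shadow_in, shadow_out = make_data(30), make_data(30)
    shadow = NNModel(shadow_in)
    member_conf += [confidence(shadow, p) for p, _ in shadow_in]
    nonmember_conf += [confidence(shadow, p) for p, _ in shadow_out]

# Attack "model": a single confidence threshold fit on the shadow data
# (a simplification of the per-class binary attack classifiers in the talk).
threshold = (sum(member_conf) / len(member_conf)
             + sum(nonmember_conf) / len(nonmember_conf)) / 2

def infer_member(target, record):
    return confidence(target, record) >= threshold

# Evaluate against a fresh target model the attacker never trained.
target_in, target_out = make_data(30), make_data(30)
target = NNModel(target_in)
hits = (sum(infer_member(target, p) for p, _ in target_in)
        + sum(not infer_member(target, p) for p, _ in target_out))
accuracy = hits / 60
print(f"attack accuracy: {accuracy:.2f}")
```

Because the toy model is overfitted, its confidence on training members is systematically higher than on unseen records, and the threshold learned entirely from shadow models transfers to the target, which is the core observation the attack exploits.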

Syllabus

Intro
Machine Learning as a Service
Machine Learning Privacy
Membership Inference Attack on Summary Statistics
Exploit Model's Predictions
ML against ML
Shadow Models
Obtaining Data for Training
Synthesis using the Model
Constructing the Attack Model
Not in a Direct Conflict!
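The "Synthesis using the Model" step in the syllabus (generating shadow training data by querying the target itself) can be sketched as a simple hill-climbing search: perturb a candidate record and keep only perturbations that raise the model's confidence. The `query_confidence` function below is a hypothetical stand-in for the target model's prediction API, not anything from the talk.

```python
import random

random.seed(1)

# Hypothetical black box standing in for the target model's query API:
# confidence peaks around a secret point, mimicking a trained classifier's
# high-confidence region for one class.
SECRET = (1.5, -0.5)

def query_confidence(record):
    return 1.0 / (1.0 + (record[0] - SECRET[0]) ** 2
                      + (record[1] - SECRET[1]) ** 2)

def synthesize(steps=200, noise=0.5):
    """Hill-climb toward records the model classifies with high confidence."""
    x = [random.uniform(-5, 5), random.uniform(-5, 5)]
    best = query_confidence(x)
    for _ in range(steps):
        cand = [x[0] + random.gauss(0, noise), x[1] + random.gauss(0, noise)]
        conf = query_confidence(cand)
        if conf > best:  # accept only confidence-increasing perturbations
            x, best = cand, conf
    return x, best

record, conf = synthesize()
print(f"synthesized record confidence: {conf:.2f}")
```

Records synthesized this way land in regions the target model is confident about, so they are statistically similar to its training data and can be used to train shadow models when the attacker has no real data from the same distribution.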


Taught by

IEEE Symposium on Security and Privacy


Related Courses

TinyML Talks - Software-Hardware Co-design for Tiny AI Systems
tinyML via YouTube
Cross-Domain Transferability of Adversarial Perturbations - CAP6412 Spring 2021
University of Central Florida via YouTube
InfoSec Deep Learning in Action
nullcon via YouTube
Zen and the Art of Adversarial Machine Learning
Black Hat via YouTube
Practical Defenses Against Adversarial Machine Learning
Black Hat via YouTube