
Membership Inference Attacks against Machine Learning Models

Offered By: IEEE via YouTube

Tags

Adversarial Machine Learning Courses
Data Privacy Courses
Machine Learning Models Courses

Course Description

Overview

Explore membership inference attacks against machine learning models in this IEEE Symposium on Security & Privacy conference talk. Delve into how machine learning models can leak information about the individual data records they were trained on, focusing on the basic membership inference attack: determining whether a specific record was part of a model's training dataset given only black-box query access to the model. Discover how to train adversarial inference models that recognize differences in the target model's predictions on inputs it was trained on versus inputs it was not. Examine empirical evaluations of these inference techniques against classification models built with commercial "machine learning as a service" providers. Investigate the factors that influence membership leakage and evaluate mitigation strategies on realistic datasets, including a sensitive hospital discharge dataset. Gain insights into machine learning privacy, membership inference on summary statistics, shadow models, and the construction of effective attack models.
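
The sketch below illustrates the shadow-model recipe described above using scikit-learn on synthetic data. The models, split sizes, and the single attack model over sorted prediction vectors are illustrative assumptions made for brevity (the approach presented in the talk trains one attack model per output class), not the talk's exact setup.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def attack_features(model, X):
    # Sort each prediction vector so the top confidence comes first; a single
    # attack model over sorted probabilities stands in for per-class attack
    # models to keep the sketch short.
    return np.sort(model.predict_proba(X), axis=1)[:, ::-1]

# Data assumed to come from the same distribution as the target's training data.
X, y = make_classification(n_samples=6000, n_features=20, n_informative=10,
                           n_classes=2, random_state=0)
X_target_train, X_rest, y_target_train, y_rest = train_test_split(
    X, y, train_size=1000, random_state=1)

# The victim's target model; the attacker only queries its predictions.
target = RandomForestClassifier(n_estimators=50, random_state=1)
target.fit(X_target_train, y_target_train)

# Shadow models mimic the target; their outputs are labelled "member" (1) for
# records in the shadow training set and "non-member" (0) otherwise.
attack_X, attack_y = [], []
X_pool, y_pool = X_rest[:4000], y_rest[:4000]
for i in range(4):
    Xs_in, Xs_out, ys_in, _ = train_test_split(
        X_pool, y_pool, train_size=500, test_size=500, random_state=i)
    shadow = RandomForestClassifier(n_estimators=50, random_state=i)
    shadow.fit(Xs_in, ys_in)
    attack_X += [attack_features(shadow, Xs_in), attack_features(shadow, Xs_out)]
    attack_y += [np.ones(len(Xs_in)), np.zeros(len(Xs_out))]

# The attack model learns to tell member from non-member prediction vectors.
attack = LogisticRegression()
attack.fit(np.vstack(attack_X), np.concatenate(attack_y))

# Black-box membership guesses against the real target model.
member_guess = attack.predict(attack_features(target, X_target_train[:200]))
outsider_guess = attack.predict(attack_features(target, X_rest[4000:4200]))
print("flagged as members (true members):    ", member_guess.mean())
print("flagged as members (true non-members):", outsider_guess.mean())

A gap between the two printed rates indicates that the target model's prediction confidences reveal membership, which is the leakage the talk quantifies.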

Syllabus

Intro
Machine Learning as a Service
Machine Learning Privacy
Membership Inference Attack on Summary Statistics
Exploit Model's Predictions
ML against ML
Shadow Models
Obtaining Data for Training
Synthesis using the Model
Constructing the Attack Model
Not in a Direct Conflict!
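
As a companion to the "Obtaining Data for Training" and "Synthesis using the Model" items above, here is a rough sketch of model-based data synthesis: hill-climbing on the target model's own confidence to generate candidate records for shadow training when no real data is available. It assumes a fitted scikit-learn-style classifier exposing predict_proba and continuous features in [0, 1]; the search parameters and acceptance rule are simplified assumptions, not the exact algorithm presented in the talk.

import numpy as np

def synthesize_record(target_model, target_class, n_features,
                      conf_threshold=0.9, k=3, max_iters=200, seed=None):
    # Start from a random record, repeatedly randomize k features, and keep a
    # change only when it raises the model's confidence for the desired class.
    rng = np.random.default_rng(seed)
    x = rng.random(n_features)
    best_conf = 0.0
    for _ in range(max_iters):
        candidate = x.copy()
        idx = rng.choice(n_features, size=k, replace=False)
        candidate[idx] = rng.random(k)
        conf = target_model.predict_proba(candidate.reshape(1, -1))[0, target_class]
        if conf > best_conf:
            x, best_conf = candidate, conf
        # Accept the record once the model is confident enough; the random
        # acceptance step is a simplifying assumption.
        if best_conf > conf_threshold and rng.random() < best_conf:
            return x
    return None

# Usage (hypothetical): given a fitted classifier `target` over 20 features,
#   record = synthesize_record(target, target_class=1, n_features=20)
# repeated calls build a synthetic dataset for training shadow models.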


Taught by

IEEE Symposium on Security and Privacy


Related Courses

Introduction to Data Analytics for Business
University of Colorado Boulder via Coursera
Digital and the Everyday: from codes to cloud
NPTEL via Swayam
Systems and Application Security
(ISC)² via Coursera
Protecting Health Data in the Modern Age: Getting to Grips with the GDPR
University of Groningen via FutureLearn
Teaching Impacts of Technology: Data Collection, Use, and Privacy
University of California, San Diego via Coursera