Adversary Instantiation - Lower Bounds for Differentially Private Machine Learning

Offered By: IEEE via YouTube

Tags

Differential Privacy Courses

Course Description

Overview

Explore the challenges and limitations of differentially private machine learning in this 15-minute IEEE presentation. Delve into the concept of adversary instantiation and its implications for establishing lower bounds in privacy-preserving ML algorithms. Learn about the non-private nature of traditional machine learning, the integration of differential privacy, and the importance of calculating epsilon. Focus on Differentially Private Stochastic Gradient Descent (DPSGD) and examine key topics such as membership inference, worst-case scenarios, intermediate model access, and adaptive distinguishers. Gain insights into gradient poisoning attacks and their impact on privacy guarantees in machine learning systems.
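Since the talk centers on DPSGD, the following is a minimal sketch of a single DPSGD update step: clip each example's gradient to a fixed L2 norm, average, and add Gaussian noise scaled to that norm. The function name and parameters (`clip_norm`, `noise_multiplier`) are illustrative, not taken from the presentation.

```python
import numpy as np

def dp_sgd_update(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DPSGD gradient step: clip each example's gradient,
    average, then add Gaussian noise calibrated to the clip norm."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down so its L2 norm is at most clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    batch = len(clipped)
    mean_grad = np.mean(clipped, axis=0)
    # Noise std of noise_multiplier * clip_norm / batch bounds each
    # example's influence; the (epsilon, delta) accounting that the
    # talk's lower bounds target is derived from this mechanism.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / batch,
                       size=mean_grad.shape)
    return mean_grad + noise
```

The clipping step is what limits any single example's contribution, and it is exactly this per-example influence that membership-inference and gradient-poisoning adversaries in the talk try to detect.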

Syllabus

Intro
Machine Learning Is Not Private
Machine learning with Differential Privacy
We want to calculate the epsilon.
We Focus on DPSGD!
Membership inference
Worst-Case Example
Intermediate Model Access
Adaptive Intermediate Model Access Distinguisher
Gradient Poisoning Attack


Taught by

IEEE Symposium on Security and Privacy

Related Courses

Statistical Machine Learning
Carnegie Mellon University via Independent
Secure and Private AI
Facebook via Udacity
Data Privacy and Anonymization in R
DataCamp
Build and operate machine learning solutions with Azure Machine Learning
Microsoft via Microsoft Learn
Data Privacy and Anonymization in Python
DataCamp