Misspecification and Uncertainty Quantification in Differential Privacy
Offered By: Fields Institute via YouTube
Course Description
Overview
Explore a 32-minute lecture on misspecification and uncertainty quantification in differential privacy, delivered by Jeremy Seeman of Pennsylvania State University at the Fields Institute. Delve into the theoretical guarantees of differential privacy, examining how implementation details are abstracted away and why this matters. Learn about adversary public information, privacy given plausible data-generating processes, and units of analysis for ε-DP_Z. Investigate misspecification in Z, the design of ε-DP_Z mechanisms, and utility and uncertainty quantification. Discuss mechanism choices, operationalization, and what it means to correct for measurement error. Consider methodological transparency, design versus adjustment, and inferential adjustment versus post-processing (the latter distinction is illustrated in a short sketch following the syllabus). Gain insights into these topics as part of the "Workshop on Differential Privacy and Statistical Data Analysis."
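As background for the mechanism-design topics the lecture covers, here is a minimal, illustrative sketch of the standard Laplace mechanism for a counting query. It is not code from the talk; the function name and parameters are hypothetical, but the mechanism itself is the textbook construction satisfying ε-DP for a sensitivity-1 query.

```python
import numpy as np

def laplace_mechanism(true_count: float, epsilon: float,
                      sensitivity: float = 1.0,
                      rng: np.random.Generator | None = None) -> float:
    """Release true_count plus Laplace noise with scale sensitivity/epsilon.

    For a counting query (sensitivity 1), changing one record shifts the
    output density by at most a factor of exp(epsilon), which is the
    epsilon-DP guarantee.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_count + rng.laplace(loc=0.0, scale=scale)

# Example: privately release a count of 130 under epsilon = 0.5.
noisy = laplace_mechanism(130, epsilon=0.5)
```

Note that this guarantee holds for the mechanism as specified mathematically; as the lecture emphasizes, implementation details (finite precision, the data-generating process, adversary side information) are abstracted away in such statements.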
Syllabus
Intro
Overview
Differential privacy's theoretical guarantees
Example: abstracting away implementation details
Why does this matter?
Notation and Problem Setup
Adversary Public Information
DP and public information
Privacy given plausible data generating processes
Units of analysis for ε-DP_Z
Misspecification in Z
Designing ε-DP_Z mechanisms
Utility and Uncertainty Quantification
Mechanism choices and operationalization
What does it mean to correct for measurement error?
Methodological Transparency
Design vs. adjustment
Inferential Adjustment vs. Post-Processing
Ideas for the workshop
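To make the "Inferential Adjustment vs. Post-Processing" distinction in the syllabus concrete, the following sketch (illustrative only, not from the talk; the setup and parameter values are assumptions) contrasts two ways of handling a Laplace-noised count: post-processing the release to be non-negative, which preserves ε-DP but can bias the estimate, versus keeping the raw noisy value and treating the DP noise as known measurement error when forming an interval.

```python
import numpy as np

rng = np.random.default_rng(0)
epsilon, sensitivity = 0.5, 1.0
scale = sensitivity / epsilon      # Laplace scale b
noise_var = 2 * scale ** 2         # Var(Laplace(0, b)) = 2 b^2

true_count = 3                     # small count, where clamping bias is worst
noisy = true_count + rng.laplace(loc=0.0, scale=scale)

# Post-processing: enforce non-negativity. Still epsilon-DP (post-processing
# preserves the guarantee), but E[max(noisy, 0)] > true_count near zero,
# so the clamped release is biased.
clamped = max(noisy, 0.0)

# Inferential adjustment: keep the unbiased noisy release and propagate the
# known noise distribution into the interval, as with classical measurement
# error. Laplace tail: P(|N| > t) = exp(-t / b).
alpha = 0.05
half_width = -scale * np.log(alpha)
ci = (noisy - half_width, noisy + half_width)
```

The design choice here mirrors the lecture's framing: post-processing optimizes the point release, while inferential adjustment preserves the statistical properties needed for valid downstream uncertainty quantification.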
Taught by
Fields Institute
Related Courses
Statistical Machine Learning - Carnegie Mellon University via Independent
Secure and Private AI - Facebook via Udacity
Data Privacy and Anonymization in R - DataCamp
Build and operate machine learning solutions with Azure Machine Learning - Microsoft via Microsoft Learn
Data Privacy and Anonymization in Python - DataCamp