Auditing Data Privacy for Machine Learning
Offered By: USENIX Enigma Conference via YouTube
Course Description
Overview
Explore the critical issue of data privacy in machine learning through this 18-minute conference talk from USENIX Enigma 2022. Delve into the risks posed by large machine learning models that memorize significant amounts of individual data from their training sets. Learn about inference attacks, particularly membership inference attacks, and their role in measuring information leakage from models. Examine real-world examples from major tech companies and various sensitive datasets to understand the privacy implications. Discover the importance of auditing tools like ML Privacy Meter in assessing and mitigating privacy risks. Gain insights into the differences between privacy and confidentiality, the vulnerabilities of models to inference attacks, and methodologies for quantifying privacy risk. Understand the relevance of these concepts to ML engineers, policymakers, and researchers in developing privacy-conscious machine learning systems.
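The membership inference attacks discussed in the talk can be illustrated with a minimal sketch. The simplest variant thresholds the model's per-example loss: trained models tend to assign lower loss to examples they memorized from the training set. The losses below are synthetic stand-ins, not output from a real model, and the threshold value is an illustrative assumption.

```python
# Minimal sketch of a loss-threshold membership inference attack.
# The per-example losses here are synthetic (illustrative only); in a
# real audit they would come from querying the target model.
import random

random.seed(0)

# Trained models typically assign lower loss to training ("member")
# examples than to unseen ("non-member") examples.
member_losses = [random.gauss(0.2, 0.1) for _ in range(1000)]
non_member_losses = [random.gauss(0.8, 0.3) for _ in range(1000)]

def infer_membership(loss, threshold=0.5):
    """Guess 'member' when the model's loss on the example is low.
    The threshold is a hypothetical choice for this sketch."""
    return loss < threshold

# Attack accuracy: fraction of correct member/non-member guesses.
correct = sum(infer_membership(l) for l in member_losses)
correct += sum(not infer_membership(l) for l in non_member_losses)
accuracy = correct / (len(member_losses) + len(non_member_losses))
print(f"attack accuracy: {accuracy:.2f}")
```

An accuracy well above the 0.5 chance baseline indicates the model leaks information about which examples it was trained on; auditing tools such as ML Privacy Meter build more refined versions of this measurement.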
Syllabus
Intro
Main Takeaways: There Is a Difference Between Confidentiality and Privacy
Privacy Regulations
Indirect Privacy Risks in Machine Learning
Machine Learning as a Service Platforms
Large Language Models
Federated Learning Algorithms
Membership Inference Attack
AI Regulations and Guidelines
Example: Language Generative Model
Examples of Vulnerable Training Data
Example: Image Classification Tasks
Auditing Data Privacy for Machine Learning
Taught by
USENIX Enigma Conference
Related Courses
Introduction to Data Analytics for Business (University of Colorado Boulder via Coursera)
Digital and the Everyday: from codes to cloud (NPTEL via Swayam)
Systems and Application Security ((ISC)² via Coursera)
Protecting Health Data in the Modern Age: Getting to Grips with the GDPR (University of Groningen via FutureLearn)
Teaching Impacts of Technology: Data Collection, Use, and Privacy (University of California, San Diego via Coursera)