When Machine Learning Isn't Private

Offered By: USENIX Enigma Conference via YouTube

Tags

USENIX Enigma Conference Courses, GPT-2 Courses, Differential Privacy Courses

Course Description

Overview

Explore the critical privacy concerns in machine learning models through this 23-minute conference talk from USENIX Enigma 2022. Delve into Nicholas Carlini's research at Google, uncovering how current models can leak personally identifiable information from their training datasets. Examine the case study of GPT-2, where up to 5% of model output is copied verbatim from the training data. Learn about the challenges of preventing data leakage, the ineffectiveness of ad-hoc privacy defenses, and the trade-offs of differentially private gradient descent. Gain insights into open research directions and practical techniques for testing model memorization, equipping both researchers and practitioners to address this pressing issue in machine learning.
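The "testing model memorization" idea mentioned above can be sketched as a simple verbatim-overlap check: flag a model generation as memorized if a sufficiently long chunk of it appears word-for-word in the training corpus. This is a toy illustration, not Carlini's actual evaluation pipeline; the function name, training documents, and the 20-character threshold are all placeholders.

```python
def verbatim_memorized(generation: str, training_docs: list[str], min_len: int = 20) -> bool:
    """Flag a generation as memorized if any substring of it at least
    min_len characters long appears verbatim in a training document."""
    n = len(generation)
    for start in range(n - min_len + 1):
        chunk = generation[start:start + min_len]
        if any(chunk in doc for doc in training_docs):
            return True
    return False

# Placeholder training corpus for illustration.
training_docs = ["The quick brown fox jumps over the lazy dog every morning."]
print(verbatim_memorized("...fox jumps over the lazy dog every...", training_docs))  # prints True
```

In practice, the threshold and the matching rule (exact substring vs. normalized token overlap) determine how aggressive the test is; a too-short threshold flags common phrases that any model would produce.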

Syllabus

Do models leak training data?
Act I: Extracting Training Data
A New Attack: Training Data Extraction
1. Generate a lot of data
2. Predict membership
Evaluation
Up to 5% of the output of language models is verbatim copied from the training dataset
Case study: GPT-2
Act II: Ad-hoc privacy isn't
Act III: Whatever can we do?
3. Use differential privacy
Questions?
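The mitigation in step 3 of the syllabus, differentially private gradient descent (DP-SGD), works by clipping each example's gradient to a fixed norm and adding Gaussian noise calibrated to that clipping bound, so no single training example can dominate an update. A minimal sketch under assumed hyperparameters (the function name, clip norm, noise multiplier, and learning rate are illustrative, not taken from the talk):

```python
import math
import random

def dp_sgd_step(params, per_example_grads, clip_norm=1.0, noise_mult=1.1, lr=0.1):
    """One DP-SGD update (sketch): clip each example's gradient to
    clip_norm, average the clipped gradients, add Gaussian noise
    scaled to the clipping bound, then take a gradient step."""
    clipped = []
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / (norm + 1e-12))  # shrink only if norm > clip_norm
        clipped.append([x * scale for x in g])
    n = len(clipped)
    avg = [sum(col) / n for col in zip(*clipped)]
    sigma = noise_mult * clip_norm / n  # noise stddev tied to per-example sensitivity
    noisy = [a + random.gauss(0.0, sigma) for a in avg]
    return [p - lr * g for p, g in zip(params, noisy)]
```

The clipping bounds each example's influence (the sensitivity), and the noise hides whether any particular example was in the batch; the trade-off the talk discusses is that both steps cost accuracy.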


Taught by

USENIX Enigma Conference

Related Courses

Adventures in Authentication and Authorization
USENIX Enigma Conference via YouTube
Navigating the Sandbox Buffet
USENIX Enigma Conference via YouTube
Meaningful Hardware Privacy for a Smart and Augmented Future
USENIX Enigma Conference via YouTube
Working on the Frontlines - Privacy and Security with Vulnerable Populations
USENIX Enigma Conference via YouTube
Myths and Lies in InfoSec
USENIX Enigma Conference via YouTube