Privacy Backdoors: Stealing Data with Corrupted Pretrained Models - Explained

Offered By: Yannic Kilcher via YouTube

Tags

Machine Learning Courses Transformers Courses Data Privacy Courses Differential Privacy Courses Supply Chain Attacks Courses

Course Description

Overview

Explore a detailed analysis of privacy backdoors in pretrained machine learning models in this video lecture. Delve into the risks of fine-tuning downloaded models and learn about a method that lets an attacker who supplies a corrupted pretrained model fully compromise the privacy of the fine-tuning data. Examine the core concept of single-use data traps, investigate how such backdoors can be implemented in transformer models, and review the additional numerical tricks used in the attack. Gain insights into the experimental results and conclusions drawn from this research, and understand the implications of this supply-chain attack for machine learning privacy, including for models trained with differential privacy.
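
The core mechanism behind a single-use data trap can be seen in a small, self-contained sketch. The code below is a hypothetical toy illustration, not the paper's actual transformer construction: a ReLU unit is planted with zero weights and a bias that guarantees it activates once, so the first SGD update applied during fine-tuning writes a scaled copy of the training example into the unit's weights, and an attacker who diffs the weights before and after fine-tuning can recover that example. All names and values (w, b, lr, d) are illustrative.

import torch

torch.manual_seed(0)
d = 8                                       # input dimension (illustrative)

# Attacker-planted "trap" unit: zero weights, bias chosen so the ReLU fires once.
w = torch.zeros(d, requires_grad=True)
b = torch.tensor(1.0, requires_grad=True)

x = torch.randn(d)                          # one fine-tuning example (the secret)
target = torch.tensor(0.0)

h = torch.relu(w @ x + b)                   # pre-activation is 1.0, so the unit activates
loss = (h - target) ** 2                    # any loss that routes the unit's output to the objective
loss.backward()

lr = 0.1
with torch.no_grad():
    w_after = w - lr * w.grad               # the single SGD step applied during fine-tuning
    delta = w_after - w                     # what the attacker observes by diffing checkpoints

# grad_w = 2 * (h - target) * x, so the weight delta is a scaled copy of the secret input.
cos = torch.nn.functional.cosine_similarity(delta, x, dim=0)
print(f"cosine similarity between weight delta and secret example: {cos.item():.3f}")  # about -1.0

The real attack must handle many examples, multi-step training, and transformer architectures, which is where the single-use gating and the additional numerical tricks covered in the video come in.
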

Syllabus

- Intro & Overview
- Core idea: single-use data traps
- Backdoors in transformer models
- Additional numerical tricks
- Experimental results & conclusion


Taught by

Yannic Kilcher

Related Courses

Statistical Machine Learning
Carnegie Mellon University via Independent
Secure and Private AI
Facebook via Udacity
Data Privacy and Anonymization in R
DataCamp
Build and operate machine learning solutions with Azure Machine Learning
Microsoft via Microsoft Learn
Data Privacy and Anonymization in Python
DataCamp