Privacy Backdoors: Stealing Data with Corrupted Pretrained Models - Explained
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore a detailed analysis of privacy backdoors in pretrained machine learning models through this comprehensive video lecture. Delve into the potential risks of fine-tuning downloaded models and learn about a method that allows attackers to fully compromise the privacy of fine-tuning data. Examine the core concept of single-use data traps, investigate how backdoors can be implemented in transformer models, and discover additional numerical techniques. Gain insights into experimental results and conclusions drawn from this research. Understand the implications of this supply chain attack on machine learning privacy and its impact on models trained with differential privacy.
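To make the "single-use data trap" idea concrete, here is a minimal toy sketch (not the paper's actual construction): a ReLU unit is initialized by the attacker so it activates on the first fine-tuning example; the resulting gradient step writes that example into the weights, and the attacker later recovers it by comparing the fine-tuned weights to the ones they shipped. All names and the scalar loss here are illustrative assumptions.

```python
import numpy as np

# Toy sketch of a single-use data trap (illustrative, not the paper's method):
# a ReLU unit w.x + b whose gradient on the first fine-tuning example
# equals that example, so one SGD step writes the input into the weights.

rng = np.random.default_rng(0)
d = 8
x_secret = rng.normal(size=d)      # first fine-tuning example (to be stolen)

w0 = rng.normal(size=d) * 0.01     # attacker-chosen trap weights (shipped)
b0 = 1.0                           # large bias so the unit fires on input

# One SGD step on the toy loss L = relu(w.x + b); when the unit is
# active, dL/dw = x, i.e. the gradient IS the private input.
lr = 0.1
z = w0 @ x_secret + b0
grad_w = x_secret if z > 0 else np.zeros(d)
grad_b = 1.0 if z > 0 else 0.0
w1 = w0 - lr * grad_w
b1 = b0 - lr * grad_b

# The attacker downloads the fine-tuned weights w1 and, knowing w0 and
# the learning rate, reconstructs the training example exactly:
x_recovered = (w0 - w1) / lr
assert np.allclose(x_recovered, x_secret)
```

In the real attack the trap must also deactivate after capturing one example (the "single-use" part), so later gradient steps do not overwrite the stored data; the video discusses how this is engineered inside transformer weights.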
Syllabus
- Intro & Overview
- Core idea: single-use data traps
- Backdoors in transformer models
- Additional numerical tricks
- Experimental results & conclusion
Taught by
Yannic Kilcher
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Natural Language Processing - Columbia University via Coursera
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent