Exploiting Unintended Feature Leakage in Collaborative Learning - Congzheng Song

Offered By: IEEE via YouTube

Tags

Federated Learning, Cybersecurity, Machine Learning, Privacy

Course Description

Overview

Explore a conference talk that delves into the security vulnerabilities of collaborative machine learning techniques, focusing on unintended feature leakage. Learn about passive and active inference attacks that can exploit model updates to infer sensitive information about participants' training data. Discover how adversaries can perform membership inference and property inference attacks, potentially compromising privacy in distributed learning environments. Examine various tasks, datasets, and learning configurations to understand the scope and limitations of these attacks. Gain insights into possible defense mechanisms against such vulnerabilities in collaborative learning systems.
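The passive property inference attack described in the talk lends itself to a short sketch. The following is a minimal, hypothetical illustration, not the authors' code: the adversary labels model updates computed on auxiliary batches with and without the target property, trains a binary classifier on them, and scores updates observed from the victim. The synthetic_update helper and the injected weight "signature" are assumptions standing in for real gradients backpropagated through the shared model.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def synthetic_update(has_property, dim=64):
    # Stand-in for a flattened gradient update observed during collaborative
    # training; in a real attack the adversary would backpropagate through
    # the shared model on auxiliary batches it controls.
    update = rng.normal(0.0, 1.0, dim)
    if has_property:
        update[:8] += 0.5  # assumed: the property leaves a faint signature in some weights
    return update

# The adversary builds labeled training data from auxiliary batches with
# and without the target property, then trains a binary property classifier.
X = np.stack([synthetic_update(h) for h in [True] * 200 + [False] * 200])
y = np.array([1] * 200 + [0] * 200)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# An observed victim update is then scored for presence of the property.
victim = synthetic_update(has_property=True)
print("P(property) =", clf.predict_proba(victim.reshape(1, -1))[0, 1])

Under this toy setup the classifier recovers the planted signature easily; the talk's experiments show the same pipeline working on real gradient updates across two-party and multi-party configurations.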

Syllabus

Intro
Overview
Deep Learning Background
Distributed / Federated Learning
Threat Model
Leakage from Model Updates
Property Inference Attacks
Infer Property Two-Party Experiment
Active Attack Works Even Better
Multi-Party Experiments
Visualize Leakage in Feature Space
Takeaways


Taught by

IEEE Symposium on Security and Privacy

Related Courses

Secure and Private AI (Facebook via Udacity)
Advanced Deployment Scenarios with TensorFlow (DeepLearning.AI via Coursera)
Big Data for Reliability and Security (Purdue University via edX)
MLOps for Scaling TinyML (Harvard University via edX)
Edge Analytics: IoT and Data Science (LinkedIn Learning)