Gradient Obfuscation Gives a False Sense of Security in Federated Learning
Offered By: USENIX via YouTube
Course Description
Overview
Explore a critical analysis of privacy protection mechanisms in federated learning, presented at USENIX Security '23. Delve into a new reconstruction attack framework for image classification tasks that challenges the effectiveness of gradient obfuscation. Examine how common gradient postprocessing procedures, including quantization, sparsification, and perturbation, may provide only a false sense of security. Discover a novel method for reconstructing images at the semantic level and learn how semantic privacy leakage can be quantified. Compare this semantic metric with conventional image similarity scores and understand the implications for evaluating image data leakage in federated learning. Gain insight into the urgent need to revisit and redesign privacy protection mechanisms in existing federated learning algorithms to ensure robust client data security.
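For orientation, the overview names three gradient postprocessing procedures. The sketch below (Python with NumPy; the function names, parameter values, and the closing cosine-similarity check are illustrative assumptions, not the talk's actual attack or metrics) shows one common form of each operation and illustrates why an obfuscated gradient can still carry much of the original signal.

    import numpy as np

    rng = np.random.default_rng(0)
    grad = rng.standard_normal(1000)  # stand-in for a client's flattened gradient

    def quantize(g, levels=16):
        # Uniform quantization: snap each value to one of `levels` evenly spaced bins.
        lo, hi = g.min(), g.max()
        step = (hi - lo) / (levels - 1)
        return lo + np.round((g - lo) / step) * step

    def sparsify(g, keep_ratio=0.1):
        # Top-k sparsification: zero out all but the largest-magnitude entries.
        k = max(1, int(keep_ratio * g.size))
        top = np.argpartition(np.abs(g), -k)[-k:]
        out = np.zeros_like(g)
        out[top] = g[top]
        return out

    def perturb(g, sigma=0.01):
        # Additive Gaussian noise, the perturbation used in DP-style defenses.
        return g + rng.normal(0.0, sigma, size=g.shape)

    obfuscated = perturb(sparsify(quantize(grad)))

    # Even after all three steps, the result stays correlated with the true
    # gradient, which is the intuition behind the "false sense of security".
    cos = obfuscated @ grad / (np.linalg.norm(obfuscated) * np.linalg.norm(grad))
    print(f"cosine similarity with the true gradient: {cos:.2f}")

Because sparsification keeps the largest-magnitude entries and the noise and quantization are mild, the obfuscated vector remains noticeably aligned with the true gradient, leaving room for the kind of semantic-level reconstruction the talk describes.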
Syllabus
USENIX Security '23 - Gradient Obfuscation Gives a False Sense of Security in Federated Learning
Taught by
USENIX
Related Courses
Private Stochastic Convex Optimization: Optimal Rates in Linear Time
Association for Computing Machinery (ACM) via YouTube
ABY3 - A Mixed Protocol Framework for Machine Learning
Association for Computing Machinery (ACM) via YouTube
Protect Privacy in a Data-Driven World - Privacy-Preserving Machine Learning
RSA Conference via YouTube
Privacy-Preserving Algorithms for Decentralised Collaborative Learning - Dr Aurélien Bellet
Alan Turing Institute via YouTube
CryptGPU: Fast Privacy-Preserving Machine Learning on the GPU
IEEE via YouTube