Gradient Obfuscation Gives a False Sense of Security in Federated Learning

Offered By: USENIX via YouTube

Tags

USENIX Security Courses, Federated Learning Courses, Privacy-Preserving Machine Learning Courses

Course Description

Overview

Explore a critical analysis of privacy protection mechanisms in federated learning presented at USENIX Security '23. Delve into a new reconstruction attack framework for image classification tasks that challenges the effectiveness of gradient obfuscation techniques. Examine how common gradient postprocessing procedures, including quantization, sparsification, and perturbation, may provide a false sense of security. Discover a novel method for reconstructing images at the semantic level and learn about the quantification of semantic privacy leakage. Compare this approach with conventional image similarity scores and understand the implications for evaluating image data leakage in federated learning. Gain insights into the urgent need for revisiting and redesigning privacy protection mechanisms in existing federated learning algorithms to ensure robust client data security.
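For readers unfamiliar with the gradient postprocessing procedures the description names, the sketch below illustrates quantization, sparsification, and Gaussian perturbation applied to a client gradient. This is a minimal, illustrative example only, not the paper's implementation; the function names and default parameters are assumptions chosen for clarity.

```python
# Illustrative sketch of common gradient obfuscation steps in federated
# learning (quantization, top-k sparsification, Gaussian perturbation).
# Not the paper's code; parameter choices are arbitrary examples.
import numpy as np

def quantize(grad: np.ndarray, num_bits: int = 8) -> np.ndarray:
    """Uniformly quantize gradient values to 2**num_bits levels."""
    levels = 2 ** num_bits - 1
    g_min, g_max = grad.min(), grad.max()
    scale = (g_max - g_min) / levels if g_max > g_min else 1.0
    return np.round((grad - g_min) / scale) * scale + g_min

def sparsify(grad: np.ndarray, keep_ratio: float = 0.1) -> np.ndarray:
    """Keep only the largest-magnitude fraction of entries (top-k)."""
    flat = grad.ravel().copy()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(np.abs(flat), -k)[-k]
    flat[np.abs(flat) < threshold] = 0.0
    return flat.reshape(grad.shape)

def perturb(grad: np.ndarray, sigma: float = 0.01) -> np.ndarray:
    """Add Gaussian noise, as in noise-based perturbation defenses."""
    return grad + np.random.normal(0.0, sigma, size=grad.shape)

if __name__ == "__main__":
    g = np.random.randn(4, 4).astype(np.float32)
    obfuscated = perturb(sparsify(quantize(g)))
    print(obfuscated)
```

The talk's central claim is that postprocessing of this kind, while it distorts individual gradient values, does not reliably prevent an attacker from reconstructing images at the semantic level.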

Syllabus

USENIX Security '23 - Gradient Obfuscation Gives a False Sense of Security in Federated Learning


Taught by

USENIX

Related Courses

Never Been KIST - Tor’s Congestion Management Blossoms with Kernel-Informed Socket Transport
USENIX via YouTube
Eclipse Attacks on Bitcoin’s Peer-to-Peer Network
USENIX via YouTube
Control-Flow Bending - On the Effectiveness of Control-Flow Integrity
USENIX via YouTube
Protecting Privacy of BLE Device Users
USENIX via YouTube
K-Fingerprinting - A Robust Scalable Website Fingerprinting Technique
USENIX via YouTube