Model Robustness Will Hurt Data Privacy?
Offered By: Hack In The Box Security Conference via YouTube
Course Description
Overview
Explore the complex relationship between model robustness and data privacy in AI systems in this conference talk from HITB2021AMS. Delve into adversarial training and its unexpected consequences for data security. Discover how improving a model's robustness against adversarial attacks can inadvertently increase its vulnerability to privacy breaches. Learn about gradient-matching techniques for reconstructing training data from model gradients, and the resulting trade-off between model security and user privacy. Gain insight into the challenges of balancing AI system robustness with data protection, and understand why both aspects must be considered in future research and development of secure AI technologies.
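To make the gradient-matching idea mentioned above concrete, here is a minimal sketch in PyTorch (an assumption; the talk does not name a framework or architecture). It takes the per-example gradients a model produces on an adversarially perturbed input, as in basic adversarial training, and optimizes a dummy input until its gradients match the observed ones, approximately recovering the private training example. The names TinyNet, fgsm_example, and gradient_matching_reconstruction, and the known-label assumption, are illustrative choices, not the speakers' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy classifier; the talk does not specify an architecture.
class TinyNet(nn.Module):
    def __init__(self, in_dim=32, n_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, 64)
        self.fc2 = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

def fgsm_example(model, x, y, eps=0.1):
    """One FGSM step: the inner maximization used in basic adversarial training."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad_x, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad_x.sign()).detach()

def leaked_gradients(model, x, y):
    """Per-example parameter gradients an attacker might observe (e.g. shared updates)."""
    loss = F.cross_entropy(model(x), y)
    return torch.autograd.grad(loss, model.parameters())

def gradient_matching_reconstruction(model, target_grads, y, in_dim=32, steps=300, lr=0.1):
    """Optimize a dummy input until its gradients match the leaked ones (label assumed known)."""
    dummy_x = torch.randn(1, in_dim, requires_grad=True)
    opt = torch.optim.Adam([dummy_x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        dummy_grads = torch.autograd.grad(
            F.cross_entropy(model(dummy_x), y), model.parameters(), create_graph=True)
        # Gradient-matching objective: make the dummy input's gradients match the leaked ones.
        match_loss = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, target_grads))
        match_loss.backward()
        opt.step()
    return dummy_x.detach()

# Usage: gradients computed on adversarial examples (as in adversarial training)
# are the signal the reconstruction attack tries to invert.
model = TinyNet()
private_x, private_y = torch.randn(1, 32), torch.tensor([3])
x_adv = fgsm_example(model, private_x, private_y)   # input used for robust training
grads = leaked_gradients(model, x_adv, private_y)   # what the attacker observes
recovered = gradient_matching_reconstruction(model, grads, private_y)
print("distance to private input:", F.mse_loss(recovered, private_x).item())

This only gestures at the trade-off discussed in the talk: the gradients that adversarial training exposes carry enough information for an attacker to approximately invert them back to the underlying training example.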
Syllabus
Introduction
Team
Outline
How to Build an AI System
AI Security Challenges
Data, Algorithm, Model
AI Abuse
AI Security
Adversarial Attack
Adversarial Training
Privacy Attacks
Model Gradients
Threat Model
Evaluation Metrics
Trade-off
Conclusions
Appendix
Taught by
Hack In The Box Security Conference
Related Courses
Browser Hacking With ANGLE (Hack In The Box Security Conference via YouTube)
Can A Fuzzer Match A Human (Hack In The Box Security Conference via YouTube)
Biometrics System Hacking in the Age of the Smart Vehicle (Hack In The Box Security Conference via YouTube)
ICEFALL - Revisiting A Decade Of OT Insecure-By-Design Practices (Hack In The Box Security Conference via YouTube)
Fuzzing the MCU of Connected Vehicles for Security and Safety (Hack In The Box Security Conference via YouTube)