Information Leakage of Neural Networks
Offered By: GAIA via YouTube
Course Description
Overview
Explore the critical issue of information leakage in neural networks through a 28-minute conference talk by Johan Östman, Research Scientist at AI Sweden. Delve into the challenges of handling sensitive data in machine learning models, especially when those models are shared externally. Examine various attack vectors for extracting sensitive information from trained models across different adversarial settings. Learn about mitigation strategies and their effectiveness in protecting data privacy. Gain insights into the legal aspects and the importance of aligning technical and legal definitions of risk. Discover Johan Östman's work in privacy-preserving machine learning, including his leadership roles at AI Sweden and Chalmers University of Technology, as well as his involvement in federated learning projects to combat money laundering. Recorded at the 2024 GAIA Conference, this talk provides valuable knowledge for professionals and researchers concerned with data privacy and security in the evolving field of machine learning.
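One of the attack vectors the talk surveys is membership inference, in which an adversary tries to determine whether a specific example was part of a model's training set. As a minimal sketch of the idea only (the function name, numbers, and threshold below are illustrative assumptions, not material from the talk), a simple loss-threshold attack flags examples on which the model's loss is unusually low:

```python
# Hypothetical minimal sketch of a loss-threshold membership-inference
# attack. Training-set members tend to have lower loss than held-out
# examples, so a low loss suggests membership. All values illustrative.
import numpy as np

def loss_threshold_attack(per_example_losses, threshold):
    """Predict membership (True) when the model's loss on an example
    falls below the threshold."""
    return np.asarray(per_example_losses) < threshold

# Toy per-example losses: members (seen in training) fit better.
member_losses = [0.05, 0.10, 0.20]      # examples seen during training
nonmember_losses = [0.90, 1.20, 0.70]   # held-out examples
preds = loss_threshold_attack(member_losses + nonmember_losses, 0.5)
print(preds.tolist())  # → [True, True, True, False, False, False]
```

Real attacks calibrate the threshold with shadow models or per-example statistics, but this toy version conveys why shared models can leak information about their training data.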
Syllabus
Information Leakage of Neural Networks by Johan Östman
Taught by
GAIA
Related Courses
Machine Learning and Artificial Intelligence Security Risk: Categorizing Attacks and Failure Modes
LinkedIn Learning
How Apple Scans Your Phone and How to Evade It - NeuralHash CSAM Detection Algorithm Explained
Yannic Kilcher via YouTube
Deep Learning New Frontiers
Alexander Amini via YouTube
MIT 6.S191 - Deep Learning Limitations and New Frontiers
Alexander Amini via YouTube