Deep Learning Robustness Verification for Few-Pixel Attacks
Offered By: ACM SIGPLAN via YouTube
Course Description
Overview
Explore a groundbreaking approach to verifying the robustness of neural networks against few-pixel attacks in this 18-minute video presentation from OOPSLA 2023. Delve into the Calzone method, developed by researchers at the Technion, Israel, which offers the first sound and complete analysis of robustness to L0 adversarial attacks. Learn how this technique leverages dynamic programming, sampling, and covering designs to verify network robustness efficiently, completing within minutes in most cases. Discover how Calzone outperforms existing methods, scaling to challenging instances where prior approaches fail. Gain insight into the importance of robustness verification in deep learning and its implications for building more secure and reliable neural networks.
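The overview names covering designs as the combinatorial ingredient that lets Calzone avoid enumerating every pixel subset. The Python sketch below only illustrates that idea and is not the authors' implementation: a (v, k, t)-covering is a family of k-pixel blocks such that every t-pixel subset lies inside some block, so a verifier that proves the prediction stable under arbitrary changes to all pixels of each block also proves robustness to any t-pixel attack. The greedy_covering helper and the parameter choices are illustrative assumptions, not part of the talk.

```python
# Illustrative sketch (not the Calzone implementation): the combinatorial idea
# behind covering designs for L0 (few-pixel) robustness verification.
# A (v, k, t)-covering is a family of k-sized blocks of pixel indices such
# that every t-subset of pixels is contained in at least one block. Proving
# the network robust to arbitrary changes of all pixels in every block then
# implies robustness to any t-pixel attack, with far fewer verifier calls
# than enumerating all C(v, t) pixel subsets.
from itertools import combinations


def greedy_covering(v: int, k: int, t: int) -> list[tuple[int, ...]]:
    """Build a (v, k, t)-covering greedily. Not optimal, but valid: every
    t-subset of range(v) ends up inside at least one returned block."""
    uncovered = set(combinations(range(v), t))
    blocks = []
    while uncovered:
        # Seed a new block with an arbitrary still-uncovered t-subset.
        block = set(next(iter(uncovered)))
        while len(block) < k:
            # Extend with the pixel whose addition covers the most
            # still-uncovered t-subsets.
            best = max(
                (p for p in range(v) if p not in block),
                key=lambda p: sum(
                    1
                    for s in combinations(sorted(block | {p}), t)
                    if s in uncovered
                ),
            )
            block.add(best)
        blocks.append(tuple(sorted(block)))
        uncovered -= set(combinations(tuple(sorted(block)), t))
    return blocks


if __name__ == "__main__":
    v, k, t = 16, 5, 2  # 16 pixels, blocks of 5, robustness to 2-pixel attacks
    blocks = greedy_covering(v, k, t)
    naive = sum(1 for _ in combinations(range(v), t))  # C(16, 2) = 120
    print(f"t-subsets to check naively: {naive}")
    print(f"covering blocks to check:   {len(blocks)}")
```

In a real pipeline each block would be handed to a neural-network verifier; how dynamic programming and sampling are used to choose and schedule these checks is what the talk itself covers.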
Syllabus
[OOPSLA23] Deep Learning Robustness Verification for Few-Pixel Attacks
Taught by
ACM SIGPLAN
Related Courses
Algorithms: Design and Analysis, Part 2 - Stanford University via Coursera
Discrete Optimization - University of Melbourne via Coursera
Conception et mise en œuvre d'algorithmes (Algorithm Design and Implementation) - École Polytechnique via Coursera
Computability, Complexity & Algorithms - Georgia Institute of Technology via Udacity
Discrete Inference and Learning in Artificial Vision - École Centrale Paris via Coursera