Evaluating Neural Network Robustness - Targeted Attacks and Defenses
Offered By: University of Central Florida via YouTube
Course Description
Overview
Explore the robustness of neural networks in this 20-minute lecture from the University of Central Florida. Delve into targeted attack metrics, existing attacks such as Fast Gradient Sign and the Jacobian-based Saliency Map Attack, and a new approach to evaluating neural network vulnerability. Examine objective functions, box constraints, and methods for finding the best combination of attacks. Learn about attack evaluation techniques and their application to the ImageNet dataset. Conclude with an introduction to defensive distillation as a potential countermeasure against adversarial attacks.
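For orientation, the Fast Gradient Sign attack mentioned above perturbs an input with a single gradient step in the direction that increases the model's loss. The snippet below is a minimal sketch assuming a PyTorch classifier; the function name, epsilon value, and [0, 1] pixel range are illustrative assumptions, not material from the lecture.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, eps=0.03):
    # Hypothetical helper: one-step Fast Gradient Sign perturbation of x.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)   # loss w.r.t. the true label
    loss.backward()
    # Step in the sign of the input gradient to increase the loss,
    # then keep pixels in the assumed valid [0, 1] range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()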
Syllabus
Intro
Summary: Terminology (cont.)
Targeted Attack Metrics
Existing Attacks
Fast Gradient Sign (FGS)
Jacobian-based Saliency Map Attack (JSMA)
New approach
Objective Functions Explored
Dealing with Box Constraints: x + δ ∈ [0, 1] (see the sketch after this syllabus)
Finding Best Combination
Different Attacks (Cont.)
Attack Evaluation
Attacks on ImageNet
Defensive Distillation
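Regarding the box-constraint item above: the constraint x + δ ∈ [0, 1] requires every pixel of the perturbed image to stay in the valid range. If the lecture follows the Carlini-Wagner paper that these slide titles suggest, one way it handles this is a tanh change of variables, optimizing an unconstrained w instead of δ. The sketch below illustrates that idea under stated assumptions; the shapes, learning rate, and loss are placeholders, and a real attack would add a misclassification term.

import torch

def to_box(w):
    # Map an unconstrained variable w into [0, 1]: 0.5 * (tanh(w) + 1)
    # always lies inside the box, so gradient descent on w can never
    # violate the pixel-range constraint and needs no clipping step.
    return 0.5 * (torch.tanh(w) + 1.0)

x = torch.rand(1, 3, 32, 32)                       # hypothetical input image
w = torch.atanh(2 * x.clamp(1e-6, 1 - 1e-6) - 1).requires_grad_(True)
opt = torch.optim.Adam([w], lr=0.01)
for _ in range(100):
    opt.zero_grad()
    delta = to_box(w) - x                          # x + delta in [0, 1] by construction
    loss = delta.pow(2).sum()                      # distance term only; an attack adds an objective f(x + delta)
    loss.backward()
    opt.step()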
Taught by
UCF CRCV
Related Courses
Inference with Torch-TensorRT Deep Learning Prediction for Beginners - CPU vs CUDA vs TensorRT (Python Simplified via YouTube)
AlexNet and ImageNet Explained (James Briggs via YouTube)
Analysis of Large-Scale Visual Recognition - Bay Area Vision Meeting (Meta via YouTube)
Introduction to Neural Networks for Computer Vision - Part I (University of Central Florida via YouTube)
Fast Is Better Than Free: Revisiting Adversarial Training - CAP6412 Spring 2021 (University of Central Florida via YouTube)