Evaluating Neural Network Robustness - Targeted Attacks and Defenses

Offered By: University of Central Florida via YouTube

Tags

Neural Networks Courses
ImageNet Courses
Adversarial Attacks Courses

Course Description

Overview

Explore the robustness of neural networks in this 20-minute lecture from the University of Central Florida. Delve into targeted attack metrics, existing attacks like Fast Gradient Sign and Jacobian-based Saliency Map Attack, and new approaches to evaluating neural network vulnerability. Examine objective functions, box constraints, and methods for finding the best combination of attacks. Learn about attack evaluation techniques and their application to ImageNet datasets. Conclude with an introduction to defensive distillation as a potential countermeasure against adversarial attacks.
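The Fast Gradient Sign attack mentioned in the overview can be sketched in a few lines: perturb the input one step of size ε in the direction of the sign of the loss gradient. The tiny logistic-regression model and its weights below are illustrative assumptions for the sketch, not material from the lecture:

```python
import numpy as np

def fgsm(x, grad, eps):
    """Fast Gradient Sign: step x by eps in the direction of the
    sign of the loss gradient, then clip to the valid range [0, 1]."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy logistic-regression "classifier" (weights are made up for the demo).
w = np.array([0.5, -1.0, 0.25])
x = np.array([0.2, 0.7, 0.4])  # a valid input in [0, 1]
y = 1.0                        # its true label

def loss_and_grad(x):
    p = 1.0 / (1.0 + np.exp(-w @ x))                 # sigmoid output
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))  # cross-entropy
    grad = (p - y) * w                               # d loss / d x
    return loss, grad

loss0, g = loss_and_grad(x)
x_adv = fgsm(x, g, eps=0.1)
loss1, _ = loss_and_grad(x_adv)
```

Taking the sign of the gradient (rather than the raw gradient) bounds every pixel's change by ε, which is why FGS is fast but produces relatively large, easily measured perturbations.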

Syllabus

Intro
Summary: Terminology (cont.)
Targeted Attack Metrics
Existing Attacks
Fast Gradient Sign (FGS)
Jacobian-based Saliency Map Attack (JSMA)
New approach
Objective Functions Explored
Dealing with Box Constraints: x + δ ∈ [0, 1]
Finding Best Combination
Different Attacks (Cont.)
Attack Evaluation
Attacks on ImageNet
Defensive Distillation
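The "Dealing with Box Constraints" item concerns keeping the perturbed image x + δ inside the valid pixel range [0, 1] during optimization. One standard way to do this is a change of variables: optimize an unconstrained variable w and let x = (tanh(w) + 1) / 2, so any gradient step on w automatically yields a valid image. This NumPy sketch illustrates that trick only; it is not a full attack implementation from the lecture:

```python
import numpy as np

def to_box(w):
    """Map unconstrained w to x in (0, 1) via x = (tanh(w) + 1) / 2,
    so the box constraint holds for every value of w."""
    return (np.tanh(w) + 1.0) / 2.0

def from_box(x, eps=1e-6):
    """Inverse map, used to initialize w from a valid image x.
    Clipping avoids arctanh(+/-1), which is infinite."""
    x = np.clip(x, eps, 1.0 - eps)
    return np.arctanh(2.0 * x - 1.0)

x = np.array([0.0, 0.25, 0.9, 1.0])  # example pixel values
w = from_box(x)
x_back = to_box(w)          # round-trips (up to the boundary clipping)
x_stepped = to_box(w + 5.0) # even a large step on w stays inside (0, 1)
```

Because the constraint is built into the parameterization, the attack can use unconstrained optimizers instead of projected gradient steps.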


Taught by

UCF CRCV

Related Courses

Machine Learning and Artificial Intelligence Security Risk: Categorizing Attacks and Failure Modes
LinkedIn Learning
How Apple Scans Your Phone and How to Evade It - NeuralHash CSAM Detection Algorithm Explained
Yannic Kilcher via YouTube
Deep Learning New Frontiers
Alexander Amini via YouTube
MIT 6.S191 - Deep Learning Limitations and New Frontiers
Alexander Amini via YouTube