Improving Transferability of Adversarial Examples with Input Diversity - CAP6412 Spring 2021
Offered By: University of Central Florida via YouTube
Course Description
Overview
Explore a lecture on improving the transferability of adversarial examples using input diversity in machine learning. Delve into the objectives, transformations, and related work in this field. Examine the methodology behind the family of Fast Gradient Sign Method (FGSM) attacks and the diverse input patterns approach. Understand the relationships between the different approaches and learn about attacking ensemble networks. Review the experimental setup, including attacks on single and ensemble networks, as well as ablation studies. Gain insights from the NIPS 2017 adversarial competition and draw conclusions on the effectiveness of input diversity in enhancing adversarial example transferability.
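The core technique covered in the lecture, applying a random input transformation at each step of an iterative FGSM attack, can be summarized in a few lines. The following is a minimal PyTorch sketch, not code from the lecture or the paper: the function names and the hyperparameters (resize_rate, diversity_prob, eps, alpha, steps) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def input_diversity(x, resize_rate=1.1, diversity_prob=0.5):
    """With probability diversity_prob, randomly resize and pad the batch
    (the diverse-input transformation); otherwise return it unchanged."""
    if torch.rand(1).item() > diversity_prob:
        return x
    img_size = x.shape[-1]
    max_size = int(img_size * resize_rate)
    rnd = torch.randint(img_size, max_size, (1,)).item()
    rescaled = F.interpolate(x, size=(rnd, rnd), mode="nearest")
    pad_total = max_size - rnd
    pad_left = torch.randint(0, pad_total + 1, (1,)).item()
    pad_top = torch.randint(0, pad_total + 1, (1,)).item()
    padded = F.pad(rescaled,
                   (pad_left, pad_total - pad_left, pad_top, pad_total - pad_top))
    # Resize back to the original resolution so the model input shape is
    # unchanged (one common implementation choice, assumed here).
    return F.interpolate(padded, size=(img_size, img_size), mode="nearest")

def di_fgsm(model, x, y, eps=16/255, alpha=2/255, steps=10):
    """Iterative FGSM where the randomized transformation is applied to the
    adversarial example before each gradient computation."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(input_diversity(x_adv)), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```

The transformation is applied with a fixed probability at every iteration (controlled by diversity_prob above), which is what distinguishes the diverse-input attack from plain iterative FGSM.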
Syllabus
Introduction
Objectives
Transformations
Related Work
Methodology
Family of FGSM
Diverse Input Patterns Method
Relationships
Attacking Ensemble Networks
Experiment - Setup
Attacking Single Networks
Attacking an Ensemble of Networks
Ablation Studies
NIPS 2017 Adversarial Competition
Conclusion
Taught by
UCF CRCV
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Computational Photography - Georgia Institute of Technology via Coursera
Einführung in Computer Vision - Technische Universität München (Technical University of Munich) via Coursera
Introduction to Computer Vision - Georgia Institute of Technology via Udacity