Getting Robust - Securing Neural Networks Against Adversarial Attacks
Offered By: University of Melbourne via YouTube
Course Description
Overview
Explore the critical topic of securing neural networks against adversarial attacks in this 49-minute seminar presented by Dr. Andrew Cullen, Research Fellow in Adversarial Machine Learning at the University of Melbourne. Delve into the vulnerabilities of machine learning systems and learn how adversarial attacks can manipulate model outputs through input changes that would not alter a human's decision. Gain insight into adversarial attacks and defense strategies across different domains, and understand how to account for adversarial behavior in research and development work. Cover key concepts such as deep learning applications, deanonymization, the accuracy vs. robustness trade-off, certified robustness, differential privacy, and training-time attacks. Discover practical methods such as polytope bounding and test-time sampling that strengthen the security of neural networks.
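To make the central idea concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one classic adversarial attack of the kind the seminar surveys. It is written in Python with PyTorch; the model, inputs, labels, and the epsilon value are illustrative placeholders, not taken from the talk.

    import torch
    import torch.nn as nn

    def fgsm_attack(model, x, y, epsilon=0.03):
        # Work on a leaf copy of the input so gradients flow to the pixels.
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        # Nudge every input dimension a small step in the direction that
        # increases the loss; the change is bounded by epsilon, so it is
        # typically imperceptible to a human viewer.
        x_adv = x + epsilon * x.grad.sign()
        # Keep the perturbed input inside the valid [0, 1] image range.
        return x_adv.clamp(0.0, 1.0).detach()

On the defense side, polytope bounding methods certify robustness by propagating sound bounds on a network's activations. The sketch below shows the simplest instance of that idea, interval bound propagation through one linear layer; the techniques covered in the seminar may be more sophisticated, and the function name here is hypothetical.

    import numpy as np

    def linear_interval_bounds(W, b, x_lo, x_hi):
        # Split the weights into positive and negative parts so each bound
        # pairs with the correct end of the input interval [x_lo, x_hi].
        W_pos = np.clip(W, 0.0, None)
        W_neg = np.clip(W, None, 0.0)
        lo = W_pos @ x_lo + W_neg @ x_hi + b
        hi = W_pos @ x_hi + W_neg @ x_lo + b
        return lo, hi

If the certified lower bound on the true class score stays above every other class's upper bound across the whole input interval, no perturbation within that interval can change the prediction.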
Syllabus
Introduction
Meet Andrew
Deep Learning Applications
Adversarial Learning
Deanonymization
Tay
Simon Weckert
What is an adversarial attack?
Examples of adversarial attacks
Why adversarial attacks exist
Accuracy
Accuracy vs. Robustness
Adversarial Attacks
Adversarial Defense
Certified Robustness
Differential Privacy
Differential Privacy Equation (the standard definition is sketched after this syllabus)
Other Methods
Example
Polytope Bounding
Test Time Samples
Training Time Attacks
Conclusion
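For reference, the "Differential Privacy Equation" item above presumably refers to the standard (epsilon, delta)-differential privacy guarantee; a sketch of the textbook definition follows, though the seminar's exact formulation may differ. A randomized mechanism M satisfies (epsilon, delta)-differential privacy if, for every pair of datasets D and D' differing in a single record and every set S of possible outputs,

    \Pr[M(D) \in S] \le e^{\varepsilon} \, \Pr[M(D') \in S] + \delta

Smaller epsilon means the output distribution barely changes when any one record changes, which is what limits deanonymization attacks of the kind discussed earlier in the talk.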
Taught by
The University of Melbourne
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Natural Language Processing - Columbia University via Coursera
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent