Machine Learning and Artificial Intelligence Security Risk: Categorizing Attacks and Failure Modes
Offered By: LinkedIn Learning
Course Description
Overview
Learn how and why machine learning and artificial intelligence systems fail, and explore ways to make these systems more secure and resilient.
Syllabus
Introduction
- Machine learning security concerns
- What you should know
- How systems can fail and how to protect them
- Why does ML security matter?
- Attacks vs. unintentional failure modes
- Security goals for ML: CIA
- Perturbation attacks and AUPs
- Poisoning attacks
- Reprogramming neural nets
- Physical domain (3D adversarial objects)
- Supply chain attacks
- Model inversion
- System manipulation
- Membership inference and model stealing
- Backdoors and existing exploits
- Reward hacking
- Side effects in reinforcement learning
- Distributional shifts and incomplete testing
- Overfitting/underfitting
- Data bias considerations
- Effective techniques for building resilience in ML
- ML dataset hygiene
- ML adversarial training
- ML access control to APIs
- Next steps
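As a taste of the perturbation-attack and adversarial-training topics listed in the syllabus, here is a minimal FGSM-style evasion sketch against a toy logistic model. Everything here is an illustrative assumption (the weights, input, label, and step size are made up and are not material from the course): the attack nudges the input in the direction that increases the model's loss, flipping its prediction.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical "trained" logistic model: weights w and bias b (illustrative values).
w = [2.0, -1.0]
b = 0.1

def predict(xv):
    """Model confidence that xv belongs to the positive class."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, xv)) + b)

x = [1.0, 0.5]   # clean input, confidently classified positive
y = 1.0          # true label

p = predict(x)

# Gradient of the cross-entropy loss w.r.t. the INPUT (not the weights)
# for logistic regression is (p - y) * w.
grad = [(p - y) * wi for wi in w]

def sign(v):
    return (v > 0) - (v < 0)

# FGSM-style step: perturb each feature by eps in the loss-increasing direction.
eps = 1.0  # illustrative; real attacks use much smaller, less visible steps
x_adv = [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

p_adv = predict(x_adv)
print(round(p, 3), round(p_adv, 3))  # confidence drops and the predicted label flips
```

The same gradient machinery drives the defense the syllabus calls adversarial training: generating perturbed examples like `x_adv` during training and teaching the model to classify them correctly.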
Taught by
Diana Kelley
Related Courses
- Neural Networks for Machine Learning (University of Toronto via Coursera)
- Good Brain, Bad Brain: Basics (University of Birmingham via FutureLearn)
- Statistical Learning with R (Stanford University via edX)
- Machine Learning 1—Supervised Learning (Brown University via Udacity)
- Fundamentals of Neuroscience, Part 2: Neurons and Networks (Harvard University via edX)