AI Security Engineering - Modeling - Detecting - Mitigating New Vulnerabilities
Offered By: RSA Conference via YouTube
Course Description
Overview
Explore the critical landscape of AI security engineering in this 54-minute RSA Conference talk. Delve into the modeling, detection, and mitigation of new vulnerabilities in AI and machine learning systems. Learn about customer compromise through adversarial machine learning, higher-order bias and fairness concerns, and physical safety and reliability issues stemming from unmitigated security and privacy threats. Examine adversarial audio examples, failure modes in machine learning, and various adversarial attack classifications. Investigate data poisoning attacks on model availability and integrity, and discover proactive defense strategies. Gain insights into threat taxonomy, adversarial goals, and the ongoing race between attacks and defenses. Understand the concept of ideal provable defense and explore security best practices, including defining input/output bounds and threat modeling AI/ML systems. Conclude with an overview of AI/ML pivots to the Security Development Lifecycle (SDL) Bug Bar, equipping you with essential knowledge to protect and defend AI services against emerging threats.
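One of the best practices the talk highlights, defining lower and upper bounds on data input and output, can be sketched as simple guard functions around an ML inference call. This is an illustrative sketch only; the function names, bounds, and thresholds below are hypothetical and not taken from the talk:

```python
# Hypothetical sketch of the "define input/output bounds" best practice:
# reject model inputs outside the trusted range seen during training,
# and sanity-check model outputs before serving them downstream.

def validate_input(features, lower=0.0, upper=1.0):
    """Reject feature vectors outside the expected training-data range."""
    if any(x < lower or x > upper for x in features):
        raise ValueError("input outside trusted bounds; possible adversarial probe")
    return features

def validate_output(probabilities, tol=1e-6):
    """Sanity-check classifier output: scores must form a valid distribution."""
    if any(p < -tol or p > 1 + tol for p in probabilities):
        raise ValueError("model emitted out-of-range scores")
    if abs(sum(probabilities) - 1.0) > 1e-3:
        raise ValueError("scores do not sum to 1; model output is suspect")
    return probabilities

# Usage: wrap an untrusted model call with both checks.
features = validate_input([0.2, 0.9, 0.5])
scores = validate_output([0.7, 0.2, 0.1])
```

Bounding both sides of the model narrows the attack surface for adversarial inputs and catches integrity failures (e.g. from data poisoning) before corrupted outputs reach customers.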
Syllabus
Intro
Customer Compromise via Adversarial ML - Case Study
Higher Order Bias/Fairness, Physical Safety & Reliability concerns stem from unmitigated Security and Privacy Threats
Adversarial Audio Examples
Failure Modes in Machine Learning
Adversarial Attack Classification
Data Poisoning: Attacking Model Availability
Data Poisoning: Attacking Model Integrity
Poisoning Model Integrity: Attack Example
Proactive Defenses
Threat Taxonomy
Adversarial Goals
A Race Between Attacks and Defenses
Ideal Provable Defense
Build upon the Details: Security Best Practices
Define lower/upper bounds of data input and output
Threat Modeling AI/ML Systems and Dependencies
Wrapping Up
AI/ML Pivots to the SDL Bug Bar
Taught by
RSA Conference
Related Courses
AI for Cybersecurity (Johns Hopkins University via Coursera)
Securing AI and Advanced Topics (Johns Hopkins University via Coursera)
Machine Learning and Artificial Intelligence Security Risk: Categorizing Attacks and Failure Modes (LinkedIn Learning)
Responsible & Safe AI Systems (NPTEL via Swayam)
Intro to Testing Machine Learning Models (Test Automation University)