The Practical Divide between Adversarial ML Research and Security Practice - A Red Team Perspective

Offered By: USENIX Enigma Conference via YouTube

Tags

USENIX Enigma Conference Courses
Cybersecurity Courses
Access Control Courses
Adversarial Machine Learning Courses

Course Description

Overview

Explore a 21-minute conference talk from USENIX Enigma 2021 that examines the practical divide between adversarial machine learning research and security practice from a red team perspective. Gain insights from Hyrum Anderson of Microsoft as he discusses the significant gaps between academic advances and industry needs in ML security. Learn sobering lessons from a Machine Learning Red Team engagement at Microsoft, including the continued importance of traditional security measures and the low awareness of ML vulnerabilities outside of security applications. Discover why most organizations struggle to protect their ML models despite extensive research in the field, and understand the challenges of translating academic tools and techniques into business needs. Examine real-world examples, red team attacks, and lessons learned to better grasp the current state of ML security and its implications for corporations and government entities.

Syllabus

Introduction
A fundamental paradigm mismatch
The state of ML security
Red teaming
Example
Red Team Attack
Lessons Learned
Health Monitoring
Data
Conclusion


Taught by

USENIX Enigma Conference

Related Courses

Adventures in Authentication and Authorization
USENIX Enigma Conference via YouTube
Navigating the Sandbox Buffet
USENIX Enigma Conference via YouTube
Meaningful Hardware Privacy for a Smart and Augmented Future
USENIX Enigma Conference via YouTube
Working on the Frontlines - Privacy and Security with Vulnerable Populations
USENIX Enigma Conference via YouTube
Myths and Lies in InfoSec
USENIX Enigma Conference via YouTube