Smashing the ML Stack for Fun and Lawsuits
Offered By: Black Hat via YouTube
Course Description
Overview
Explore the legal risks and ethical considerations of adversarial machine learning research in this Black Hat conference talk. Delve into the potential legal consequences researchers face when targeting commercial ML systems from major tech companies. Analyze how existing laws apply to the testing of deployed ML systems, and examine the expectations of vendors regarding system usage. Learn about various attack vectors like evasion, poisoning, and model inversion. Gain valuable insights into relevant legal frameworks, including contracts, the Computer Fraud and Abuse Act, and Section 1201. Conclude with high-level takeaways to navigate the complex intersection of ML security research and legal compliance.
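To make the first of those attack vectors concrete, here is a minimal sketch of an evasion attack in the FGSM style; it is an illustration only, not material from the talk, and the names `model`, `image`, and `label` are hypothetical placeholders for a PyTorch classifier, a batched input tensor, and its true class indices.

```python
import torch
import torch.nn.functional as F

def fgsm_evasion(model, image, label, epsilon=0.03):
    """Return a perturbed copy of `image` crafted to be misclassified (FGSM sketch)."""
    # Track gradients with respect to the input, not the model weights.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon,
    # and keep pixel values in the valid [0, 1] range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

The legal portion of the talk then asks whether running probes of this kind against a deployed commercial system could conflict with the vendor's terms of use, the Computer Fraud and Abuse Act, or Section 1201.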
Syllabus
Intro
Demo
Evasion Attacks
Poisoning
Model Inversion
Summary
Disclaimer
Legal Questions
Contracts
Computer Fraud and Abuse Act
Section 1201
High-Level Takeaways
Taught by
Black Hat
Related Courses
AI Security Engineering - Modeling - Detecting - Mitigating New Vulnerabilities (RSA Conference via YouTube)
Trustworthy Machine Learning: Challenges and Frameworks (USENIX Enigma Conference via YouTube)
Learning Under Data Poisoning (Simons Institute via YouTube)
Understanding Security Threats Against Machine - Deep Learning Applications (Devoxx via YouTube)
Breaking NBAD and UEBA Detection (YouTube)