Smashing the ML Stack for Fun and Lawsuits
Offered By: Black Hat via YouTube
Course Description
Overview
Explore the legal risks and ethical considerations of adversarial machine learning research in this Black Hat conference talk. Delve into the potential legal consequences researchers face when targeting commercial ML systems from major tech companies. Analyze how existing laws apply to the testing of deployed ML systems, and examine the expectations of vendors regarding system usage. Learn about various attack vectors like evasion, poisoning, and model inversion. Gain valuable insights into relevant legal frameworks, including contracts, the Computer Fraud and Abuse Act, and Section 1201. Conclude with high-level takeaways to navigate the complex intersection of ML security research and legal compliance.
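The overview names evasion as one of the attack vectors covered. As a minimal sketch of the idea (the talk itself does not supply code), here is an illustrative fast-gradient-sign-style evasion attack against a toy linear classifier; the model, weights, and epsilon value are all hypothetical choices for demonstration:

```python
import numpy as np

def predict(w, b, x):
    # Toy linear classifier: class 1 if w.x + b > 0, else class 0.
    return int(np.dot(w, x) + b > 0)

def fgsm_evasion(w, b, x, eps):
    # Evasion attack sketch: for a linear model the gradient of the
    # score with respect to x is simply w, so stepping against
    # sign(w) lowers the score (and stepping with it raises it).
    y = predict(w, b, x)
    direction = -np.sign(w) if y == 1 else np.sign(w)
    return x + eps * direction

# Illustrative weights and input (assumptions, not from the talk).
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([3.0, 1.0])                  # score = 1.0 -> class 1
x_adv = fgsm_evasion(w, b, x, eps=1.0)    # perturbed input
print(predict(w, b, x), predict(w, b, x_adv))  # -> 1 0
```

A small perturbation flips the predicted class; against a deployed commercial model the same probing is exactly the activity whose legal status the talk examines.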
Syllabus
Intro
Demo
Evasion Attacks
Poisoning
Model Inversion
Summary
Disclaimer
Legal Questions
Contracts
Computer Fraud and Abuse Act
Section 1201
High-Level Takeaways
Taught by
Black Hat
Related Courses
Supreme Court's Van Buren Ruling on the CFAA - Implications and Analysis
Association for Computing Machinery (ACM) via YouTube
What Public Interest AI Auditors Can Learn from Security Testing - Legislative and Practical Wins
USENIX Enigma Conference via YouTube
How Federal Prosecutors Use The CFAA
Black Hat via YouTube
The Big Chill - Legal Landmines that Stifle Security Research and How to Disarm Them
Black Hat via YouTube
What Security Researchers Need to Know About Anti-Hacking Law
Black Hat via YouTube