YoVDO

Shadow Vulnerabilities in AI/ML Data Stacks - What You Don't Know Can Hurt You

Offered By: CNCF [Cloud Native Computing Foundation] via YouTube

Tags

Machine Learning Security Courses
Remote Code Execution Courses
eBPF Courses

Course Description

Overview

Explore the hidden security risks in AI/ML data stacks in this conference talk. Delve into shadow vulnerabilities in open-source AI software, including the inherent Remote Code Execution (RCE) risks in model-serving components. Examine common security anti-patterns in AI engineering, such as unclassified CVEs and impractical security patches. Learn about newer approaches to security hygiene, including checkpoint formats such as SavedModel and SafeTensors. Discover why traditional security approaches fall short when analyzing model checkpoints, and watch real-code demonstrations of why runtime context is crucial for detecting these silent vulnerabilities. Gain insight into leveraging eBPF and open-source tooling to strengthen AI/ML data stack security.
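The RCE risk mentioned above stems from legacy checkpoint formats built on Python pickle, which can execute arbitrary code the moment a file is deserialized, whereas SafeTensors stores only raw tensor data. The sketch below is a minimal illustration and not material from the talk itself; the file names, the payload, and the use of the torch and safetensors packages are assumptions for the example.

import os
import pickle

import torch
from safetensors.torch import save_file, load_file

# --- Why pickle-based checkpoints carry an inherent RCE risk ---
class MaliciousCheckpoint:
    # pickle calls __reduce__ to learn how to rebuild the object;
    # whatever callable it returns is executed during deserialization.
    def __reduce__(self):
        return (os.system, ("echo 'arbitrary code ran at checkpoint load time'",))

with open("malicious_checkpoint.pkl", "wb") as f:   # hypothetical file name
    pickle.dump(MaliciousCheckpoint(), f)

with open("malicious_checkpoint.pkl", "rb") as f:
    pickle.load(f)   # merely loading the file runs the attacker's command

# --- SafeTensors, by contrast, stores only tensors plus a JSON header ---
save_file({"weight": torch.zeros(2, 2)}, "model.safetensors")
tensors = load_file("model.safetensors")  # pure data: no code path to execute
print(tensors["weight"].shape)

A static scan of the .pkl file above may classify it as benign data, which is why the description emphasizes runtime context: eBPF-based syscall tracing can observe the os.system call at load time even when file-level analysis misses it.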

Syllabus

Shadow Vulnerabilities in AI/ML Data Stacks - What You Don't Know Can Hurt You - Avi Lumelsky & Nitzan Mousseri


Taught by

CNCF [Cloud Native Computing Foundation]

Related Courses

Build and operate machine learning solutions with Azure Machine Learning
Microsoft via Microsoft Learn
Machine Learning Learning Plan
Amazon Web Services via AWS Skill Builder
Machine Learning Security (German)
Amazon Web Services via AWS Skill Builder
Machine Learning Security (Simplified Chinese)
Amazon Web Services via AWS Skill Builder
Machine Learning Security (Indonesian)
Amazon Web Services via AWS Skill Builder