Poisoned Pickles - Security Risks and Protections for Serialized ML Models
Offered By: CNCF [Cloud Native Computing Foundation] via YouTube
Course Description
Overview
Explore the security risks and protective measures associated with pickle serialization in machine learning in this 27-minute conference talk. Delve into the widespread use of Python's pickle module for serializing and distributing ML models, and understand the vulnerabilities that let attackers inject arbitrary code into ML pipelines. Learn why poisoned pickles are hard to detect, and discover emerging tools and techniques, inspired by DevOps practices, for generating safer, higher-quality pickles. Gain practical insights into protecting your models from attacks and implementing trust-or-discard processes that strengthen the security of your ML workflows.
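To make the risk concrete, the sketch below (illustrative only, not code from the talk) shows how a pickle can carry an attacker's payload through the __reduce__ hook, followed by a simple trust-or-discard gate; load_trusted and its expected_sha256 parameter are hypothetical names chosen for this example.

import hashlib
import os
import pickle


class Poisoned:
    """A class whose serialized form runs arbitrary code when loaded."""

    def __reduce__(self):
        # pickle stores the returned callable plus its arguments;
        # pickle.loads() then *calls* os.system at deserialization time.
        return (os.system, ("echo pwned: this ran during unpickling",))


payload = pickle.dumps(Poisoned())
# pickle.loads(payload)  # uncommenting this line executes the shell command


def load_trusted(data: bytes, expected_sha256: str) -> object:
    """Trust-or-discard (hypothetical helper): only unpickle data whose
    digest matches one obtained through a trusted channel."""
    if hashlib.sha256(data).hexdigest() != expected_sha256:
        raise ValueError("Digest mismatch: discarding untrusted pickle")
    return pickle.loads(data)

For inspection without loading, the standard library's pickletools.dis(payload) disassembles the stream, making the GLOBAL/STACK_GLOBAL and REDUCE opcodes that payloads like this rely on visible before any code runs.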
Syllabus
Poisoned Pickles Make You Ill - Adrian Gonzalez-Martin, Seldon
Taught by
CNCF [Cloud Native Computing Foundation]
Related Courses
Build and operate machine learning solutions with Azure Machine Learning - Microsoft via Microsoft Learn
Machine Learning Learning Plan - Amazon Web Services via AWS Skill Builder
Machine Learning Security (German) - Amazon Web Services via AWS Skill Builder
Machine Learning Security (Simplified Chinese) - Amazon Web Services via AWS Skill Builder
Machine Learning Security (Indonesian) - Amazon Web Services via AWS Skill Builder