YoVDO

Poisoned Pickles - Security Risks and Protections for Serialized ML Models

Offered By: CNCF (Cloud Native Computing Foundation) via YouTube

Tags

Machine Learning Security Courses
Cybersecurity Courses
Python Courses
Code Injection Courses

Course Description

Overview

Explore the security risks and protective measures associated with pickle serialization in machine learning in this 27-minute conference talk. Examine the widespread use of the pickle module for serializing and distributing ML models, and understand the vulnerabilities that let attackers inject arbitrary code into ML pipelines. Learn why poisoned pickles are hard to detect, and discover emerging tools and techniques, inspired by DevOps practices, for generating safer, higher-quality pickles. Gain practical insight into protecting your models from attack and implementing trust-or-discard processes that strengthen the security of your ML workflows.
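The core vulnerability the talk covers comes from how pickle deserialization works: a pickle is a small program of instructions, and an object's `__reduce__` method can instruct the unpickler to call an arbitrary callable at load time. The sketch below (illustrative only; the class name and allowlist are invented for the example, not taken from the talk) shows a poisoned pickle executing code on load, and a minimal "trust-or-discard" defense using a restricted `Unpickler` that rejects any global not on an explicit allowlist:

```python
import io
import pickle


class Poisoned:
    """Illustrative malicious object: pickle stores the instructions
    returned by __reduce__, so merely loading this pickle calls eval()
    with attacker-controlled input."""

    def __reduce__(self):
        # Benign stand-in payload; a real attacker could run any code here.
        return (eval, ("'arbitrary code ran at load time'",))


payload = pickle.dumps(Poisoned())
result = pickle.loads(payload)  # eval() executes during deserialization
print(result)  # -> arbitrary code ran at load time


class SafeUnpickler(pickle.Unpickler):
    """Trust-or-discard: only resolve globals on a small allowlist,
    rejecting everything else (including builtins.eval)."""

    ALLOWED = {("builtins", "list"), ("builtins", "dict")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")


try:
    SafeUnpickler(io.BytesIO(payload)).load()
except pickle.UnpicklingError as exc:
    print("rejected:", exc)  # the poisoned pickle is discarded, not executed
```

Note that allowlisting globals narrows the attack surface but is not a complete fix; as the talk argues, untrusted pickles should ideally never be loaded at all, with signing and provenance checks applied before deserialization.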

Syllabus

Poisoned Pickles Make You Ill - Adrian Gonzalez-Martin, Seldon


Taught by

CNCF (Cloud Native Computing Foundation)

Related Courses

Rootkits and Stealth Apps: Creating & Revealing 2.0 HACKING
Udemy
Game Hacking: Cheat Engine Game Hacking Basics
Udemy
Reverse Engineering and Memory Hacking with Cheat Engine
Udemy
The Evolution of the Software Supply Chain Attack
Pluralsight
Web Security
Stanford University via YouTube