Poisoned Pickles Make You Ill - Securing Machine Learning Model Serialization
Offered By: Linux Foundation via YouTube
Course Description
Overview
Explore the security risks associated with using the pickle module for serializing and distributing machine learning models in this 32-minute conference talk by Adrian Gonzalez-Martin from Seldon Technologies Ltd. Discover how easily pickles can be poisoned and used to inject arbitrary code into ML pipelines, posing significant threats to data science projects. Learn about the challenges in detecting poisoned pickles and gain insights into emerging tools and techniques for generating safer serialized models. Drawing inspiration from DevOps practices, understand how to implement trust-or-discard processes to enhance security. Gain practical knowledge on protecting your ML models from potential attacks and creating more secure and reliable pickles for your data science workflows.
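The core risk the talk describes comes from pickle's ability to execute code during deserialization. As a rough illustration (not code from the talk; the class names PoisonedModel and TrustedUnpickler are made up for this sketch), the Python snippet below shows how an object's __reduce__ hook lets a pickle run an arbitrary command the moment it is loaded, and how a restricted Unpickler that only trusts an allow-list of globals, in the spirit of the trust-or-discard process mentioned above, can refuse such a payload.

```python
import io
import pickle


class PoisonedModel:
    """Stand-in for a 'model' whose pickle runs attacker code when loaded."""

    def __reduce__(self):
        # __reduce__ tells pickle how to rebuild the object; whatever callable
        # is returned here gets invoked on load, so an attacker can smuggle in
        # os.system, exec, or similar.
        import os
        return (os.system, ("echo 'arbitrary code ran during unpickling'",))


poisoned_bytes = pickle.dumps(PoisonedModel())

# Simply loading the pickle would execute the payload -- no model code needs
# to be called at all:
# pickle.loads(poisoned_bytes)  # would run the shell command above


class TrustedUnpickler(pickle.Unpickler):
    """Trust-or-discard: only resolve globals on an explicit allow-list."""

    ALLOWED = {("builtins", "dict"), ("builtins", "list")}  # extend as needed

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")


try:
    TrustedUnpickler(io.BytesIO(poisoned_bytes)).load()
except pickle.UnpicklingError as exc:
    # e.g. "blocked global: posix.system" on Linux
    print(f"rejected poisoned pickle: {exc}")
```

Overriding find_class in this way follows the "Restricting Globals" pattern from the standard-library pickle documentation; in practice, real model pickles reference many library globals, which is why the talk also covers detection tooling and safer serialization formats rather than allow-listing alone.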
Syllabus
Poisoned Pickles Make You Ill - Adrian Gonzalez-Martin, Seldon Technologies Ltd
Taught by
Linux Foundation
Related Courses
Machine Learning Learning Plan - Amazon Web Services via AWS Skill Builder
Machine Learning Security - Amazon Web Services via AWS Skill Builder
Machine Learning Security (French) - Amazon Web Services via AWS Skill Builder
Machine Learning Security (German) - Amazon Web Services via AWS Skill Builder
Machine Learning Security (Indonesian) - Amazon Web Services via AWS Skill Builder