Securing ML Workloads with Kubeflow and MLOps - Pwned By Statistics
Offered By: Linux Foundation via YouTube
Course Description
Overview
Explore the intersection of machine learning security and MLOps in this 51-minute conference talk. Delve into the challenges of ML implementation and learn how Kubeflow and MLOps practices can harden machine learning workloads. Walk through example models, including the Circle Detector and the Wolf vs Husky Detector, and examine flaws in federated learning. Gain insights into building secure pipelines and understand attacks such as distillation, model extraction, and hidden-data attacks. Learn how models memorize secrets, how to detect leakage, and how differential privacy can mitigate it. Analyze the role of threat modeling in ML systems and explore AutoML, AI models, and data drift. Conclude with a summary and a Q&A session to deepen your understanding of securing ML workloads through Kubeflow and MLOps strategies.
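One mitigation the talk covers, differential privacy, can be illustrated with a minimal sketch of the Laplace mechanism: noise calibrated to a query's sensitivity and a privacy budget epsilon is added to an aggregate before release. This is an illustrative example, not code from the talk; the function names are hypothetical.

```python
import math
import random

def laplace_noise(scale):
    # Sample from Laplace(0, scale) via the inverse-CDF method.
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def dp_count(values, predicate, epsilon, sensitivity=1.0):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    sensitivity/epsilon suffices.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(sensitivity / epsilon)
```

Smaller epsilon means more noise and stronger privacy; the released count is then only approximately the true count.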
Syllabus
Introduction
Why ML
Why ML is hard
MLOps
Circle Detector
Wolf vs Husky Detector
Flaws in Federated Learning
Additional Techniques
Building a Pipeline
Extracting Your Model
Distillation Attack
Model Extraction Attack
Hidden Data Attack
Secret Memorization
Leakage Detection
Summary
Questions
AutoML
AI Models
Data Drift
Attack Systems
Differential Privacy
Threat Modeling
MLOps
Outro
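One attack from the syllabus, model extraction, works by querying a victim model's prediction API and training a local surrogate on the harvested input/output pairs. The sketch below is a hypothetical minimal example (the victim's linear rule and the perceptron surrogate are illustrative assumptions, not details from the talk):

```python
import random

def victim_predict(x):
    # Stand-in for a remote model API: a secret linear decision rule
    # the attacker cannot see, only query.
    return 1 if 2.0 * x[0] - 1.0 * x[1] > 0.5 else 0

def extract_model(n_queries=2000, lr=0.1, epochs=20):
    # Step 1: harvest labeled data by querying the victim.
    data = [(random.uniform(-1, 1), random.uniform(-1, 1))
            for _ in range(n_queries)]
    labels = [victim_predict(x) for x in data]
    # Step 2: train a surrogate (here a simple perceptron) on the
    # stolen labels, approximating the victim's decision boundary.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b
```

The surrogate typically agrees with the victim on most fresh inputs, which is why defenses discussed in the talk (rate limiting, output perturbation) focus on making such query-based reconstruction expensive or inaccurate.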
Taught by
Linux Foundation
Related Courses
How to Detect Silent Failures in ML Models (Data Science Dojo via YouTube)
Dataset Management for Computer Vision - Important Component to Delivering Computer Vision Solutions (Open Data Science via YouTube)
Testing ML Models in Production - Detecting Data and Concept Drift (Databricks via YouTube)
Ekya - Continuous Learning of Video Analytics Models on Edge Compute Servers (USENIX via YouTube)
Building and Maintaining High-Performance AI (Data Science Dojo via YouTube)