Securing ML Workloads with Kubeflow and MLOps - Pwned By Statistics
Offered By: Linux Foundation via YouTube
Course Description
Overview
Explore the intersection of machine learning security and MLOps in this 51-minute conference talk. Delve into the challenges of ML implementation and learn how Kubeflow and MLOps practices can enhance the security of your machine learning workloads. Discover various ML models, including Circle Detector and Wolf vs Husky Detector, and examine potential flaws in federated learning. Gain insights into building secure pipelines, extracting models, and understanding different types of attacks such as distillation, model extraction, and hidden data attacks. Investigate techniques for secret memorization, leakage detection, and implementing differential privacy. Analyze the importance of threat modeling in ML systems and explore concepts like AutoML, AI models, and data drift. Conclude with a comprehensive summary and engage in a Q&A session to deepen your understanding of securing ML workloads through Kubeflow and MLOps strategies.
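Among the techniques the talk covers, differential privacy is the most self-contained to illustrate. Below is a minimal sketch (not from the talk itself; the function names and sample data are made up for illustration) of the core idea: clip each record to a known range, then add Laplace noise scaled to the query's sensitivity divided by the privacy budget epsilon.

```python
import math
import random

def laplace_noise(scale):
    """Sample zero-mean Laplace noise via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def dp_mean(values, lower, upper, epsilon):
    """Differentially private mean of a bounded numeric attribute.

    Clipping bounds the contribution of any single record, so the
    sensitivity of the mean over n records is (upper - lower) / n.
    """
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / len(clipped)
    sensitivity = (upper - lower) / len(clipped)
    return true_mean + laplace_noise(sensitivity / epsilon)

# Hypothetical example: release a private mean age from a tiny dataset.
ages = [23, 35, 41, 29, 52, 38]
private_mean = dp_mean(ages, lower=0, upper=100, epsilon=1.0)
```

Smaller epsilon values add more noise and give stronger privacy; in practice a vetted library is used rather than hand-rolled noise sampling.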
Syllabus
Introduction
Why ML
Why ML is hard
MLOps
Circle Detector
Wolf vs Husky Detector
Flaws in Federated Learning
Additional Techniques
Building a Pipeline
Extracting Your Model
Distillation Attack
Model Extraction Attack
Hidden Data Attack
Secret Memorization
Leakage Detection
Summary
Questions
AutoML
AI Models
Data Drift
Attack Systems
Differential Privacy
Threat Modeling
MLOps
Outro
Taught by
Linux Foundation
Related Courses
Introduction to AI/ML Toolkits with Kubeflow (Linux Foundation via edX)
Distributed Multi-worker TensorFlow Training on Kubernetes (Google via Google Cloud Skills Boost)
Leveraging Cloud-Based Machine Learning on Google Cloud Platform: Real World Applications (LinkedIn Learning)
Building End-to-end Machine Learning Workflows with Kubeflow (Pluralsight)
Smart Analytics, Machine Learning, and AI on GCP (Pluralsight)