MLSecOps - Automated Online and Offline ML Model Evaluations on Kubernetes
Offered By: Linux Foundation via YouTube
Course Description
Overview
Explore MLSecOps and automated ML model evaluations on Kubernetes in this conference talk. Delve into the intersection of machine learning, DevOps, infrastructure, and security, and understand why robust MLSecOps infrastructure is needed to prevent data loss through model inversion attacks. Learn how to overcome the complexity of monitoring model security on Kubernetes at scale by combining automated online real-time evaluations with detailed offline analysis. Discover how KServe, Knative, Apache Kafka, and Trusted-AI tools are used to serve ML models, persist prediction payloads, and automate evaluations in production environments. Gain insights into real-time model explanation, fairness detection, and adversarial detection techniques that visualize and report potential security threats over time.
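The offline side of the pipeline described above (evaluating persisted prediction payloads for fairness) can be sketched with a minimal example. This is an illustrative assumption, not the talk's actual code: the payload schema (`group`, `prediction` keys) and the metric (a demographic parity ratio, one of the group-fairness measures Trusted-AI tooling computes) are chosen here for clarity.

```python
# Hypothetical sketch: offline fairness evaluation over logged prediction
# payloads. The payload keys and metric choice are assumptions for
# illustration, not the talk's actual schema.

def demographic_parity_ratio(payloads, group_key="group", pred_key="prediction"):
    """Ratio of favorable-outcome rates between groups (1.0 = parity)."""
    rates = {}
    for g in set(p[group_key] for p in payloads):
        members = [p for p in payloads if p[group_key] == g]
        rates[g] = sum(p[pred_key] for p in members) / len(members)
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

# Example: group A always receives the favorable outcome, group B only half
# the time, so the ratio drops to 0.5 and would be reported as a concern.
payloads = [
    {"group": "A", "prediction": 1},
    {"group": "A", "prediction": 1},
    {"group": "B", "prediction": 1},
    {"group": "B", "prediction": 0},
]
print(demographic_parity_ratio(payloads))  # 0.5
```

In a production setting this computation would run periodically over payloads persisted via Kafka, rather than over an in-memory list.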
Syllabus
Introduction
Power of Choice
Security in AI
Demo
ML Pipelines
ML Pipeline Metrics
CaseUp
Offline ML Evaluation
Online ML Evaluation
KServe Service
Predictors
Fairness Detection
Loggers
Data Ingestion
Demonstration
Trusted-AI
Istio
Taught by
Linux Foundation
Related Courses
Serverless Machine Learning Model Inference on Kubernetes with KServe - Devoxx via YouTube
Machine Learning in Fastly's Compute@Edge - Linux Foundation via YouTube
ModelMesh: Scalable AI Model Serving on Kubernetes - Linux Foundation via YouTube
Creating a Custom Serving Runtime in KServe ModelMesh - Hands-On Experience - Linux Foundation via YouTube
Integrating High Performance Feature Stores with KServe Model Serving - Linux Foundation via YouTube