MLSecOps - Automated Online and Offline ML Model Evaluations on Kubernetes
Offered By: Linux Foundation via YouTube
Course Description
Overview
Explore MLSecOps and automated ML model evaluations on Kubernetes in this conference talk. Delve into the intersection of machine learning, DevOps, infrastructure, and security, understanding why robust MLSecOps infrastructure matters for preventing training-data leakage through model inversion. Learn how to overcome the complexities of monitoring model security on Kubernetes at scale by implementing automated online real-time evaluations and detailed offline analysis. Discover the use of KServe, Knative, Apache Kafka, and Trusted-AI tools for serving ML models, persisting payloads, and automating evaluations in production environments. Gain insights into real-time model explanations, fairness detection, and adversarial detection techniques to visualize and report potential security threats over time.
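To make the offline-evaluation idea above concrete: the talk describes persisting inference payloads to Apache Kafka and analyzing them later with Trusted-AI tooling. The sketch below is not the speakers' code; it is a minimal illustration that consumes logged payloads with kafka-python and computes a simple group-fairness metric with AIF360 (a Trusted-AI project). The topic name, broker address, payload schema, and the "gender" protected attribute with 0/1 group encodings are all illustrative assumptions.

```python
# Hypothetical offline evaluation: read logged inference payloads from Kafka
# and compute group-fairness metrics with AIF360 (Trusted-AI).
import json

import pandas as pd
from kafka import KafkaConsumer
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

consumer = KafkaConsumer(
    "inference-logs",                 # assumed topic the model-serving logger writes to
    bootstrap_servers="kafka:9092",   # assumed broker address
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

records = []
for message in consumer:
    payload = message.value
    # Assumed payload shape: one feature dict plus the model's binary prediction.
    features = payload["instances"][0]
    records.append({"gender": features["gender"], "label": payload["predictions"][0]})

df = pd.DataFrame(records)

# Wrap the logged predictions as a binary-label dataset,
# treating "gender" as the protected attribute.
dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["gender"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

# Disparate impact near 1.0 indicates similar positive-prediction rates across groups;
# large deviations are the kind of signal an automated offline evaluation would report.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

In a production setup of the kind the talk outlines, a job like this would run on a schedule against the persisted payloads and push its metrics to a dashboard for trend reporting over time.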
Syllabus
Introduction
Power of Choice
Security in AI
Demo
ML Pipelines
ML Pipeline Metrics
CaseUp
Offline ML Evaluation
Online ML Evaluation
KServe
Predictors
Fairness Detection
Loggers
Data Ingestion
Demonstration
Trusted-AI
Istio
Taught by
Linux Foundation
Related Courses
Macroeconometric Forecasting (International Monetary Fund via edX)
Machine Learning With Big Data (University of California, San Diego via Coursera)
Data Science at Scale - Capstone Project (University of Washington via Coursera)
Structural Equation Model and its Applications (The Chinese University of Hong Kong via Coursera, in Cantonese)
Data Science in Action - Building a Predictive Churn Model (SAP Learning)