
Defending Against Adversarial Model Attacks Using Kubeflow

Offered By: CNCF [Cloud Native Computing Foundation] via YouTube

Tags

Conference Talks Courses, Cybersecurity Courses, Adversarial Attacks Courses, Kubeflow Pipelines Courses

Course Description

Overview

Explore a conference talk on defending against adversarial model attacks using Kubeflow. Learn about the importance of AI algorithm robustness in critical domains such as self-driving cars, facial recognition, and hiring. Discover how to build a pipeline that resists adversarial attacks by leveraging Kubeflow Pipelines and integrating with the LF AI Adversarial Robustness Toolbox (ART). Gain insights into testing machine learning models' adversarial robustness in production on Kubeflow Serving using payload logging and ART. Cover topics including Trusted AI, open governance, security, the toolkit, and other related projects. Conclude with a Kubeflow survey and a practical demonstration.
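The talk itself pairs Kubeflow Pipelines and Kubeflow Serving payload logging with ART; the snippet below is only a minimal, illustrative sketch of the kind of robustness check ART enables, not code from the talk. It assumes the adversarial-robustness-toolbox and scikit-learn packages are installed, and the dataset, model, and attack strength (eps) are arbitrary placeholders.

```python
# Minimal sketch (assumptions: iris data, logistic regression, eps=0.2):
# wrap a trained model with ART, craft FGSM adversarial examples, and
# compare clean vs. adversarial accuracy.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Train a simple baseline classifier.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Wrap the model so ART can attack it, then generate FGSM perturbations.
classifier = SklearnClassifier(model=model, clip_values=(X.min(), X.max()))
attack = FastGradientMethod(estimator=classifier, eps=0.2)
X_adv = attack.generate(x=X_test)

# A large gap between the two accuracies signals weak adversarial robustness.
print(f"clean accuracy:       {model.score(X_test, y_test):.3f}")
print(f"adversarial accuracy: {model.score(X_adv, y_test):.3f}")
```

In a pipeline like the one described in the talk, a check of this kind would typically run as its own Kubeflow Pipelines step, with the clean-versus-adversarial accuracy gap used to decide whether a model is promoted to serving.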

Syllabus

Introduction
Trusted AI
Open Governance
Security
Toolkit
Other Projects
Adversarial Robustness Toolbox
Kubeflow Survey
Demo


Taught by

CNCF [Cloud Native Computing Foundation]

Related Courses

Machine Learning and Artificial Intelligence Security Risk: Categorizing Attacks and Failure Modes
LinkedIn Learning
How Apple Scans Your Phone and How to Evade It - NeuralHash CSAM Detection Algorithm Explained
Yannic Kilcher via YouTube
Deep Learning New Frontiers
Alexander Amini via YouTube
MIT 6.S191 - Deep Learning Limitations and New Frontiers
Alexander Amini via YouTube