Robustness and Security for AI - Addressing Edge Cases in Mission-Critical Systems
Offered By: MLOps World: Machine Learning in Production via YouTube
Course Description
Overview
Explore the critical importance of addressing edge cases in AI systems for mission-critical applications. Delve into the concept of robustness and security in artificial intelligence, examining both naturally occurring and malicious edge cases. Learn about the limitations of traditional accuracy metrics in predicting real-world model performance and discover strategies for developing more resilient AI models. Investigate emerging regulations and penalties surrounding Responsible AI, and understand the necessity of articulating potential risks and mitigation steps. Gain insights into robustness metrics, problem classes, and model failure bias to shape AI towards more benign failure cases. Examine specific examples of edge cases, including resizing attacks and adversarial patch attacks, while considering the implications of data poisoning and the challenges of statistical interpretation in AI. Explore the balance between protecting innovation and ensuring the responsible development and deployment of AI technologies for a better world.
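The resizing and adversarial patch attacks mentioned above are image-specific, but the underlying fragility is easiest to see on a toy model. The sketch below is not material from the talk; it uses plain NumPy and an illustrative linear classifier to show an FGSM-style worst-case perturbation: the model's clean prediction is confident, yet a small, bounded per-feature change flips the label, which is why accuracy alone is a poor proxy for robustness.

```python
# Toy illustration of a "malicious edge case": for a linear scorer, the
# worst-case L-infinity perturbation of size eps shifts the score by
# eps * ||w||_1, so a modest per-feature change can flip a confident label.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=20)   # illustrative weights of a linear classifier
b = 0.0

def predict(x):
    """Return the predicted class (1 if the linear score is positive)."""
    return int(x @ w + b > 0)

# A clean input classified as 1 with a comfortable margin.
x_clean = w / np.linalg.norm(w)
margin = x_clean @ w + b
print("clean prediction:", predict(x_clean), "margin:", round(float(margin), 3))

# FGSM-style worst-case step: move each feature against sign(w), just far
# enough (plus 10%) to push the score below zero.
eps = 1.1 * margin / np.sum(np.abs(w))
x_adv = x_clean - eps * np.sign(w)
print("per-feature budget eps:", round(float(eps), 3))
print("adversarial prediction:", predict(x_adv))   # flips to 0
```

The same idea underlies a robustness assessment in practice: rather than reporting a single clean-accuracy number, measure how predictions hold up under bounded or naturally occurring perturbations of the input.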
Syllabus
Intro
Background
Unique Vulnerabilities of AI
Accuracy vs. Robustness
Probable and Improbable Edge Cases
Naturally Occurring Edge Cases
Resizing Attack
Adversarial Patch Attack
Robustness Assessment
Data Poisoning Audit
Lies, Damned Lies, and Statistics
Regulation and Compliance
Protecting the Pace of AI Innovation
For A Better World
Taught by
MLOps World: Machine Learning in Production
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Computational Photography - Georgia Institute of Technology via Coursera
Einführung in Computer Vision - Technische Universität München (Technical University of Munich) via Coursera
Introduction to Computer Vision - Georgia Institute of Technology via Udacity