Privacy Governance and Explainability in ML/AI
Offered By: Strange Loop Conference via YouTube
Course Description
Overview
Explore the complex intersection of privacy governance and explainability in machine learning and artificial intelligence in this 45-minute conference talk from Strange Loop. Delve into the challenges posed by GDPR and other data privacy regulations, particularly in the context of ML and AI systems. Examine methods for enhancing privacy, governing data used in ML/AI, and addressing potential bias in models. Learn about privacy by design, algorithmic fairness, and the role of developers and engineers in ensuring ethical AI practices. Discover techniques such as dynamic sampling, differential privacy, and multidimensional privacy analytics to mitigate privacy risks. Gain insights into building consumer trust and confidence in an increasingly complex technological landscape.
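Of the techniques mentioned above, differential privacy is the most precisely defined: it adds calibrated noise to a query result so that the presence or absence of any single record is statistically masked. As a minimal illustration (not from the talk itself), the sketch below applies the standard Laplace mechanism to a counting query; the data and function names are hypothetical.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Draw a sample from Laplace(0, scale) via inverse-CDF sampling.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    """Release a count satisfying epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon is sufficient.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: count users over 40 without exposing any individual.
ages = [23, 45, 31, 52, 38, 61, 29, 47]
noisy = private_count(ages, lambda a: a > 40, epsilon=0.5)
```

Smaller values of `epsilon` give stronger privacy at the cost of noisier answers; repeated queries consume the privacy budget additively.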
Syllabus
Introduction
Agenda
Why does it matter?
Landscape of privacy risk
Privacy is more than security
Fundamental right to privacy
Trust context
Transparency and consumer trust
Context-based privacy
Privacy by design
Governance and data optimization maturity
Privacy by design in retrospect
Current state of privacy
Machine learning and AI
Algorithmic fairness
Role of developers and engineers
Seeking out risk
Types of data
Methods
Model Prediction Risk
Dynamic Sampling
Differential Privacy
Multidimensional Privacy Analytics
Life Cycle
Conclusion
Taught by
Strange Loop Conference
Related Courses
Introduction to Artificial Intelligence (Stanford University via Udacity)
Probabilistic Graphical Models 1: Representation (Stanford University via Coursera)
Artificial Intelligence for Robotics (Stanford University via Udacity)
Computer Vision: The Fundamentals (University of California, Berkeley via Coursera)
Learning from Data (Introductory Machine Learning course) (California Institute of Technology via Independent)