Computers are Stupid - Protecting "AI" from Itself
Offered By: GOTO Conferences via YouTube
Course Description
Overview
Explore the critical aspects of AI security and ethics in this thought-provoking conference talk from GOTO Berlin 2018. Delve into the field of adversarial learning, examining how easily artificial intelligence can be fooled and the challenges of building robust, secure neural networks. Investigate the risks machine learning poses to data privacy and ethical data use, including the implications of the GDPR. Learn about the hype surrounding AI and its real-world applications, from Google Translate to self-driving cars. Discover why computers can be considered "stupid" and how this affects AI development. Examine adversarial examples in biometrics and the potential for malicious intent. Address issues of bias in AI systems and privacy concerns in linear regression models. Explore potential solutions, including API access control and homomorphic encryption. Consider the role of interdisciplinary panels and paper distribution in advancing AI security and ethics. Gain valuable insights into the complexities of AI development and the ongoing efforts to protect it from vulnerabilities and misuse.
Syllabus
Introduction
The Hype
Example: Google Translate
Self-driving cars
Computers are stupid
Adversarial examples
Biometrics
Malicious intent
Bias
Privacy
Linear regression
How to solve this problem
Access to API
Homomorphic Encryption
Interdisciplinary Panels
Paper Distribution
Taught by
GOTO Conferences
Related Courses
Introduction to Data Analytics for Business — University of Colorado Boulder via Coursera
Digital and the Everyday: from codes to cloud — NPTEL via Swayam
Systems and Application Security — (ISC)² via Coursera
Protecting Health Data in the Modern Age: Getting to Grips with the GDPR — University of Groningen via FutureLearn
Teaching Impacts of Technology: Data Collection, Use, and Privacy — University of California, San Diego via Coursera