Just-So Stories for AI - Explaining Black-Box Predictions
Offered By: Strange Loop Conference via YouTube
Course Description
Overview
Explore state-of-the-art strategies for explaining black-box machine learning model decisions in this 42-minute Strange Loop Conference talk by Sam Ritchie. Delve into the challenges of interpreting complex algorithms and the importance of demanding plausible explanations for AI-driven decisions. Learn about various techniques for generating explanations, including decision trees, random forests, and LIME. Examine the parallels between AI rationalization and human decision-making processes, and discuss the ethical implications of relying on unexplainable AI systems. Understand the significance of model interpretability in maintaining human control over technological advancements, ensuring compliance with data protection regulations, and clarifying our ethical standards in an increasingly AI-driven world.
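The talk mentions LIME (Local Interpretable Model-agnostic Explanations) as one technique for generating explanations. As a rough illustration of the idea only, here is a minimal sketch using the open-source Python "lime" package against a scikit-learn random forest; the dataset, model, and parameter choices are illustrative assumptions, not material from the talk itself.

# Minimal LIME sketch (illustrative assumptions, not from the talk).
# Requires the scikit-learn and lime packages.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train a "black-box" model: a random forest on the iris dataset.
data = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# Build an explainer over the training distribution.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain a single prediction: LIME perturbs the instance, queries the
# black-box model, and fits a local linear model whose weights serve as
# the explanation.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())

The printed output is a list of (feature, weight) pairs: a locally faithful linear story about one prediction, which is exactly the kind of "just-so story" the talk's title alludes to.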
Syllabus
Introduction
Outline
Stripe
Rules
Models
Decision Trees
Random Forest
Explanations
Intuition
Structure
Algorithm
Explanation
Elephant Trunk
Observation
LIME
AI Rationalisation
Frogger
Methods of model interpretability
Human interpretability
Peter Norvig
Roger Sperry
Homo Deus
Algorithms
Explanations are harmful
Why explanations are important
Human compatible AI
Data protection regulation
Clarify our ethics
Conclusion
Taught by
Strange Loop Conference
Related Courses
Practical Machine Learning - Johns Hopkins University via Coursera
Detección de objetos (Object Detection) - Universitat Autònoma de Barcelona (Autonomous University of Barcelona) via Coursera
Practical Machine Learning on H2O - H2O.ai via Coursera
Modélisez vos données avec les méthodes ensemblistes (Model Your Data with Ensemble Methods) - CentraleSupélec via OpenClassrooms
Introduction to Machine Learning for Coders! - fast.ai via Independent