
Just-So Stories for AI - Explaining Black-Box Predictions

Offered By: Strange Loop Conference via YouTube

Tags

Strange Loop Conference, Machine Learning, AI Ethics, Decision Trees, Random Forests

Course Description

Overview

Explore state-of-the-art strategies for explaining black-box machine learning model decisions in this 42-minute Strange Loop Conference talk by Sam Ritchie. Delve into the challenges of interpreting complex algorithms and the importance of demanding plausible explanations for AI-driven decisions. Learn about various techniques for generating explanations, including decision trees, random forests, and LIME. Examine the parallels between AI rationalization and human decision-making processes, and discuss the ethical implications of relying on unexplainable AI systems. Understand the significance of model interpretability in maintaining human control over technological advancements, ensuring compliance with data protection regulations, and clarifying our ethical standards in an increasingly AI-driven world.
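
The talk covers these techniques at a conceptual level. As a rough illustration only, the sketch below shows how the open-source lime package can be asked to explain a single prediction from a scikit-learn random forest; the dataset, model settings, and variable names are assumptions made for this example, not material from the talk.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

# Illustrative data and "black-box" model (assumed for this sketch).
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# LIME fits a simple surrogate model in the neighborhood of one input,
# yielding a local, human-readable explanation of that one decision.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)

# Each pair is (feature condition, weight): a local "just-so story"
# for this single prediction, not a global account of the model.
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")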

Syllabus

Introduction
Outline
Stripe
Rules
Models
Decision Trees
Random Forest
Explanations
Intuition
Structure
Algorithm
Explanation
Elephant Trunk
Observation
LIME
AI Rationalization
Frogger
Model interpretability
Human interpretability
Peter Norvig
Roger Sperry
Homo Deus
Algorithms
Explanations are harmful
Why explanations are important
Human compatible AI
Data protection regulation
Clarify our ethics
Conclusion


Taught by

Strange Loop Conference

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Natural Language Processing
Columbia University via Coursera
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent