Just-So Stories for AI - Explaining Black-Box Predictions

Offered By: Strange Loop Conference via YouTube

Tags

Strange Loop Conference Courses, Machine Learning Courses, AI Ethics Courses, Decision Trees Courses, Random Forests Courses

Course Description

Overview

Explore state-of-the-art strategies for explaining black-box machine learning model decisions in this 42-minute Strange Loop Conference talk by Sam Ritchie. Delve into the challenges of interpreting complex algorithms and the importance of demanding plausible explanations for AI-driven decisions. Learn about various techniques for generating explanations, including decision trees, random forests, and LIME. Examine the parallels between AI rationalization and human decision-making processes, and discuss the ethical implications of relying on unexplainable AI systems. Understand the significance of model interpretability in maintaining human control over technological advancements, ensuring compliance with data protection regulations, and clarifying our ethical standards in an increasingly AI-driven world.
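The techniques named above lend themselves to a short illustration. The sketch below is not from the talk itself: it contrasts an inherently interpretable decision tree with a black-box random forest, then uses LIME to explain a single forest prediction. The scikit-learn and lime Python packages and the Iris dataset are illustrative assumptions, not anything taken from this listing.

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text
    from lime.lime_tabular import LimeTabularExplainer

    data = load_iris()
    X, y = data.data, data.target

    # An inherently interpretable model: a shallow decision tree whose
    # rules can be printed and read directly.
    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(export_text(tree, feature_names=list(data.feature_names)))

    # A black-box model: a random forest of many trees, typically more
    # accurate but hard to explain as a whole.
    forest = RandomForestClassifier(n_estimators=100).fit(X, y)

    # LIME explains one prediction at a time by fitting a simple
    # surrogate model in the neighborhood of the instance.
    explainer = LimeTabularExplainer(
        X,
        feature_names=data.feature_names,
        class_names=list(data.target_names),
        mode="classification",
    )
    explanation = explainer.explain_instance(X[0], forest.predict_proba, num_features=4)
    print(explanation.as_list())  # (feature, weight) pairs for this one prediction

The contrast is the point: the tree's printed rules describe the whole model, while LIME's feature weights explain only the single prediction passed to explain_instance, the local, post-hoc style of explanation the talk examines.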

Syllabus

Introduction
Outline
Stripe
Rules
Models
Decision Trees
Random Forest
Explanations
Intuition
Structure
Algorithm
Explanation
Elephant Trunk
Observation
LIME
AI Rationalization
Frogger
Mythos of Model Interpretability
Human interpretability
Peter Norvig
Roger Sperry
Homo Deus
Algorithms
Explanations are harmful
Why explanations are important
Human compatible AI
Data protection regulation
Clarify our ethics
Conclusion


Taught by

Strange Loop Conference

Related Courses

Knowledge-Based AI: Cognitive Systems
Georgia Institute of Technology via Udacity
AI for Everyone: Master the Basics
IBM via edX
Introducción a La Inteligencia Artificial (IA)
IBM via Coursera
AI for Legal Professionals (I): Law and Policy
National Chiao Tung University via FutureLearn
Artificial Intelligence Ethics in Action
LearnQuest via Coursera