Interactive Explainable AI - Enhancing Trust and Decision-Making
Offered By: Open Data Science via YouTube
Course Description
Overview
Explore the world of Interactive Explainable AI in this 37-minute talk by Dr. Meg Kurdziolek. Dive into why understanding AI decision-making processes matters, discover current Explainable AI (XAI) techniques, and learn how human factors research shapes trust in AI systems. Gain insights into practical interaction design strategies for improving XAI services, well suited for AI enthusiasts, data scientists, and machine learning professionals. The presentation covers the need for XAI, human factors in AI and XAI, the historical context of explaining complex concepts, the user experience of XAI, and real-world examples of interactive XAI implementations. It concludes with parting thoughts on the future of explainable and trustworthy AI technologies.
Syllabus
- Intro
- What do we need XAI for?
- Human factors of AI and XAI
- We’ve actually been explaining complex things for a long time
- The UX of XAI
- Examples of interactive XAI
- Parting Thoughts
Taught by
Open Data Science
Related Courses
- Introduction to Artificial Intelligence (Stanford University via Udacity)
- Natural Language Processing (Columbia University via Coursera)
- Probabilistic Graphical Models 1: Representation (Stanford University via Coursera)
- Computer Vision: The Fundamentals (University of California, Berkeley via Coursera)
- Learning from Data, an introductory machine learning course (California Institute of Technology via Independent)