Interactive Explainable AI - Enhancing Trust and Decision-Making
Offered By: Open Data Science via YouTube
Course Description
Overview
Explore the world of Interactive Explainable AI in this 37-minute talk by Dr. Meg Kurdziolek. Dive into the importance of understanding AI decision-making processes, discover cutting-edge Explainable AI (XAI) techniques, and learn how human factors research influences trust in AI systems. Gain insights into practical interaction design strategies for enhancing XAI services; the talk is well suited to AI enthusiasts, data scientists, and machine learning professionals. The presentation covers the need for XAI, human factors in AI and XAI, the long history of explaining complex concepts, the user experience of XAI, and real-world examples of interactive XAI implementations. Conclude with parting thoughts on the future of explainable and trustworthy AI technologies.
Syllabus
- Intro
- What do we need XAI for?
- Human factors of AI and XAI
- We’ve actually been explaining complex things for a long time
- The UX of XAI
- Examples of interactive XAI
- Parting Thoughts
Taught by
Open Data Science
Related Courses
- User Experience for the Web (Open2Study)
- Mobile Application Experiences Part 1: From a Domain to an App Idea (Massachusetts Institute of Technology via edX)
- UX-Design for Business (Fraunhofer IESE via Independent)
- User Experience (UX) Design: Human Factors and Culture in Design | 设计的人因与文化 (Tsinghua University via edX)
- Introduction to User Experience Design (Georgia Institute of Technology via Coursera)