Fostering Trust via Explainable ML Inferences
Offered By: Linux Foundation via YouTube
Course Description
Overview
Explore the concept of explainable machine learning (ML) inferences as a means to foster trust and enhance perceived value in this 41-minute conference talk. Delve into the dynamics of trust between service providers (Trustors) and service consumers (Trustees), examining a practical, quantifiable framework for implementation. Learn how Trustors must balance being both trusting and trustworthy, while Trustees are not bound by such requirements. Discover the challenges faced by Trustors in providing services that exceed the minimum trust threshold necessary for establishing and maintaining client relationships. Gain insights into how explainability in ML can facilitate more focused conversations with customers, particularly when addressing subpar inferences.
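As a rough illustration of the kind of explained inference the talk alludes to (this sketch is not taken from the talk or from the speaker's framework), the Python snippet below attributes a single logistic-regression prediction to its input features using scikit-learn. The dataset, model choice, and the index of the prediction a customer might question are all placeholder assumptions; the point is only that a per-feature breakdown gives the provider and the customer something concrete to discuss when an inference looks subpar.

```python
# Minimal sketch: explain one inference via per-feature contributions
# to a logistic regression's log-odds (features are standardized first).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
y = data.target

model = LogisticRegression(max_iter=1000).fit(X, y)

# Hypothetical inference a customer asked about.
i = 0
contributions = model.coef_[0] * X[i]          # per-feature log-odds terms
top = np.argsort(np.abs(contributions))[::-1][:5]  # five largest drivers

print(f"Predicted class: {data.target_names[model.predict(X[i:i+1])[0]]}")
for j in top:
    print(f"{data.feature_names[j]:>25s}: {contributions[j]:+.3f}")
```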
Syllabus
Fostering Trust via Explainable ML Inferences - Dalmo Cirne, Workday
Taught by
Linux Foundation
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Natural Language Processing - Columbia University via Coursera
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent