Explorations on Single Usability Metrics
Offered By: Association for Computing Machinery (ACM) via YouTube
Course Description
Overview
Explore a conference talk from the ACM CHI Conference on Human Factors in Computing Systems that delves into the challenges and considerations of developing Single Usability Metrics (SUM) for product evaluation. Learn about Microsoft's long-term summative evaluation program, which aimed to generate SUM scores across products over time. Discover the issues encountered during this process, including error rate reliability, difficulties in establishing objective time-on-task targets, and scale anchoring problems. Examine how these challenges impacted the communication of SUM scores and led to the exploration of alternative metrics using simple thresholds developed from anchor text and inter-metric correlations. Gain insights into various aspects of usability measurement, including counting errors, average satisfaction, time on task limitations, and methods for measuring user experience and satisfaction. Conclude with a summary of findings and participate in a Q&A session to further understand the complexities of single usability metrics in computing systems.
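The core idea behind a Single Usability Metric is to standardize several component measures (task completion, satisfaction, time on task, errors) and combine them into one score. The sketch below is a minimal, hypothetical illustration of that averaging step only: the data, the targets, and the `z_score` helper are invented for this example, and the published SUM method (Sauro and Kindlund) involves additional steps not shown here.

```python
# Minimal sketch of a SUM-style combined score: standardize each
# component metric against a target value, then average the
# standardized scores. All targets and observations are illustrative.
from statistics import mean, stdev

def z_score(values, target, higher_is_better=True):
    """Standardized distance of the sample mean from a target value."""
    z = (mean(values) - target) / stdev(values)
    return z if higher_is_better else -z

# Hypothetical per-participant observations for one product task.
completion = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]               # success (target: 78%)
satisfaction = [4, 5, 3, 4, 4, 5, 3, 4, 4, 5]             # 5-point scale (target: 4.0)
time_on_task = [62, 55, 80, 70, 58, 64, 90, 61, 66, 59]   # seconds (target: 75 s)

components = [
    z_score(completion, 0.78),
    z_score(satisfaction, 4.0),
    z_score(time_on_task, 75, higher_is_better=False),  # shorter time is better
]
sum_score = mean(components)
print(round(sum_score, 2))
```

As the talk's challenges suggest, the averaging step is the easy part; the hard part is choosing defensible targets (for example, what counts as a "good" time on task) and getting reliable error counts to feed in.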
Syllabus
Introduction
Background
Metrics
Why Metrics
Metrics Statistics
Example
Challenges 1: Counting Errors
Challenges 2: Average Satisfaction
Data Set
Time on Task
Limitations
What is a good time
Measuring the user experience
Measuring satisfaction
Summary
Q&A
Taught by
ACM SIGCHI
Related Courses
Mobile Application Experiences Part 1: From a Domain to an App Idea — Massachusetts Institute of Technology via edX
UX-Design for Business — Fraunhofer IESE via Independent
Design Principles: an Introduction — University of California, San Diego via Coursera
Interaction Design Capstone Project — University of California, San Diego via Coursera
Mobile Application Experiences Part 4: Understanding Use — Massachusetts Institute of Technology via edX