Explorations on Single Usability Metrics
Offered By: Association for Computing Machinery (ACM) via YouTube
Course Description
Overview
Explore a conference talk from the ACM CHI Conference on Human Factors in Computing Systems that examines the challenges and considerations of developing Single Usability Metrics (SUM) for product evaluation. Learn about Microsoft's long-term summative evaluation program, which aimed to generate SUM scores across products over time. Discover the issues encountered along the way, including unreliable error rates, difficulties in establishing objective time-on-task targets, and scale anchoring problems. Examine how these challenges affected the communication of SUM scores and led to the exploration of alternative metrics based on simple thresholds derived from scale anchor text and inter-metric correlations. Gain insights into various aspects of usability measurement, including counting errors, average satisfaction, time-on-task limitations, and methods for measuring user experience and satisfaction. Conclude with a summary of findings and a Q&A session that further explores the complexities of single usability metrics in computing systems.
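To make the idea of a single score concrete, here is a minimal, hypothetical sketch (not the speakers' method, and with made-up targets and data) of one common way to combine task completion, satisfaction, time on task, and errors: express each metric as the estimated proportion of users meeting a target, then average the proportions.

```python
# Illustrative sketch only: hypothetical targets and results, assuming
# roughly normal metric distributions (a strong simplification).
from statistics import NormalDist

std_normal = NormalDist()  # standard normal, turns z-scores into proportions

def proportion_meeting_target(mean, sd, target, higher_is_better):
    """Estimate the share of users meeting a target for one metric."""
    z = (target - mean) / sd
    return 1 - std_normal.cdf(z) if higher_is_better else std_normal.cdf(z)

# Hypothetical task-level results and targets.
completion = 0.82  # observed completion rate (already a proportion)
satisfaction = proportion_meeting_target(5.6, 1.1, 5.0, higher_is_better=True)    # 7-point scale, target >= 5
time_on_task = proportion_meeting_target(95.0, 30.0, 120.0, higher_is_better=False)  # seconds, target <= 120
errors = proportion_meeting_target(0.4, 0.7, 1.0, higher_is_better=False)         # errors per task, target <= 1

# Equal-weight average of the four standardized components.
sum_score = (completion + satisfaction + time_on_task + errors) / 4
print(f"Illustrative single usability score: {sum_score:.0%}")
```

The talk's central concern is that each of these inputs (counting errors reliably, setting a defensible time target, anchoring satisfaction scales) is harder to standardize than this simple averaging suggests.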
Syllabus
Introduction
Background
Metrics
Why Metrics
Metrics Statistics
Example
Challenges 1: Counting Errors
Challenges 2: Average Satisfaction
Data Set
Time on Task
Limitations
What Is a Good Time?
Measuring the user experience
Measuring satisfaction
Summary
Q&A
Taught by
ACM SIGCHI
Related Courses
Introduction to Statistics: Descriptive Statistics - University of California, Berkeley via edX
Analytical Chemistry / Instrumental Analysis - Rice University via Coursera
Estadística para investigadores: Todo lo que siempre quiso saber - Universidad de Salamanca via Miríadax
Valoración de futbolistas - Universitat Politècnica de València via UPV [X]
Configuring the World, part 1: Comparative Political Economy - Leiden University via Coursera