Enriching task models with usability and user experience evaluation data

Citations: 4
Readers (Mendeley): 9

This article is free to access.

Abstract

Evaluation results focusing on usability and user experience are often difficult to take into account during an iterative design process. This is because evaluation relies on concrete artefacts (a prototype or system), while design and development are based on more abstract descriptions such as task models or software models. As the concrete data cannot be represented, evaluation results are simply discarded. This paper addresses the discrepancy between the abstract view of task models and the concrete data produced in evaluations: first, by describing the requirements for a task modelling notation, namely (a) representation of data for each individual participant, (b) representation of aggregated data for one evaluation as well as (c) for several evaluations, and (d) visualization of multi-dimensional data gathered at runtime from both the evaluation and the interactive system; and second, by showing how these requirements were integrated into a task modelling tool. Possible usages of the tool are demonstrated using an example from an experimental evaluation.

Citation (APA)

Bernhaupt, R., Palanque, P., Drouet, D., & Martinie, C. (2019). Enriching task models with usability and user experience evaluation data. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11262 LNCS, pp. 146–163). Springer Verlag. https://doi.org/10.1007/978-3-030-05909-5_9
