Reliable visual analytics, a prerequisite for outcome assessment of engineering systems

Abstract

Various evaluation approaches exist for multi-purpose visual analytics (VA) frameworks. They are based on empirical studies in information visualization or on community activities, for example the VA Science and Technology Challenge (2006-2014), created as a community evaluation resource to “decide upon the right metrics to use, and the appropriate implementation of those metrics including datasets and evaluators” [1]. In this paper, we propose using evaluated VA environments for computer-based processes or systems, with the main goal of aligning user plans, system models and software results. For this purpose, trust in the VA outcome must be established. This can be achieved by following the (meta-)design principles of a human-centered verification and validation assessment and by taking users’ task models and interaction styles into account, since interactive work with the visualization is an integral part of VA. To define reliable VA, we identify various dimensions of reliability along with their quality criteria, requirements, attributes and metrics. Several software packages are used to illustrate the concepts.

Citation (APA)

Auer, E., Luther, W., & Weyers, B. (2020). Reliable visual analytics, a prerequisite for outcome assessment of engineering systems. Acta Cybernetica, 24(3), 287–314. https://doi.org/10.14232/ACTACYB.24.3.2020.3
