Vis ex machina: An analysis of trust in human versus algorithmically generated visualization recommendations

11 citations · 34 Mendeley readers

Abstract

More visualization systems are simplifying the data analysis process by automatically suggesting relevant visualizations. However, little work has been done to understand if users trust these automated recommendations. In this paper, we present the results of a crowd-sourced study exploring preferences and perceived quality of recommendations that have been positioned as either human-curated or algorithmically generated. We observe that while participants initially prefer human recommenders, their actions suggest an indifference for recommendation source when evaluating visualization recommendations. The relevance of presented information (e.g., the presence of certain data fields) was the most critical factor, followed by a belief in the recommender's ability to create accurate visualizations. Our findings suggest a general indifference towards the provenance of recommendations, and point to idiosyncratic definitions of visualization quality and trustworthiness that may not be captured by simple measures. We suggest that recommendation systems should be tailored to the information-foraging strategies of specific users.

Citation (APA)

Zehrung, R., Singhal, A., Correll, M., & Battle, L. (2021). Vis ex machina: An analysis of trust in human versus algorithmically generated visualization recommendations. In Conference on Human Factors in Computing Systems - Proceedings. Association for Computing Machinery. https://doi.org/10.1145/3411764.3445195
