More visualization systems are simplifying the data analysis process by automatically suggesting relevant visualizations. However, little work has been done to understand whether users trust these automated recommendations. In this paper, we present the results of a crowd-sourced study exploring preferences and perceived quality of recommendations that have been positioned as either human-curated or algorithmically generated. We observe that while participants initially prefer human recommenders, their actions suggest an indifference to recommendation source when evaluating visualization recommendations. The relevance of the presented information (e.g., the presence of certain data fields) was the most critical factor, followed by a belief in the recommender's ability to create accurate visualizations. Our findings suggest a general indifference towards the provenance of recommendations, and point to idiosyncratic definitions of visualization quality and trustworthiness that may not be captured by simple measures. We suggest that recommendation systems should be tailored to the information-foraging strategies of specific users.
CITATION STYLE
Zehrung, R., Singhal, A., Correll, M., & Battle, L. (2021). Vis ex machina: An analysis of trust in human versus algorithmically generated visualization recommendations. In Conference on Human Factors in Computing Systems - Proceedings. Association for Computing Machinery. https://doi.org/10.1145/3411764.3445195