The misinformation problem affects the development of society. Misleading content and unreliable information overwhelm social networks and the media. In this context, the use of data visualizations to support news and stories is increasing. Misleading visualizations, whether intentional or accidental, influence the perceptions of an audience that usually consists of neither visualization nor domain experts. Several factors affect whether a visualization can be accurately tagged as confusing or misleading. In this paper, we present a machine learning approach to detect whether an information visualization is potentially confusing or liable to be misunderstood with respect to the analytic task it tries to support. The approach builds on fine-grained features identified through domain engineering and meta-modelling of the information visualization and dashboards domain. We automatically generated visualizations from a tri-variate dataset through the software product line paradigm and manually labelled them to obtain a training dataset. The results support the viability of the proposal as a tool to help journalists, audiences and society in general, not only to detect confusing visualizations, but also to select the visualization that best supports a previously defined task given the data domain.
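To make the classification pipeline concrete, the following is a minimal sketch, assuming the fine-grained features are encoded as a tabular dataset and a random-forest classifier is used; the feature names, values and model choice are illustrative assumptions, not the implementation described in the paper.

```python
# A minimal sketch (not the authors' implementation): a supervised classifier
# that flags potentially confusing visualizations from fine-grained features.
# Feature names, values and the RandomForest choice are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical labelled examples: each row is one automatically generated
# visualization described by its configuration and the analytic task it targets,
# with a manual "confusing" label (1 = confusing/misleading, 0 = not).
train = pd.DataFrame([
    {"chart_type": "pie",     "task": "comparison", "num_categories": 12, "confusing": 1},
    {"chart_type": "bar",     "task": "comparison", "num_categories": 12, "confusing": 0},
    {"chart_type": "line",    "task": "trend",      "num_categories": 5,  "confusing": 0},
    {"chart_type": "scatter", "task": "trend",      "num_categories": 5,  "confusing": 1},
])

X = pd.get_dummies(train.drop(columns="confusing"))  # one-hot encode categorical features
y = train["confusing"]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score a new candidate visualization for a previously defined analytic task.
candidate = pd.DataFrame([{"chart_type": "pie", "task": "trend", "num_categories": 8}])
candidate = pd.get_dummies(candidate).reindex(columns=X.columns, fill_value=0)
print("potentially confusing:", bool(clf.predict(candidate)[0]))
```

The same feature vector can be scored against each candidate chart type for a given task, so the classifier also supports choosing the least confusing visualization rather than only rejecting a misleading one.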
Vázquez-Ingelmo, A., García-Holgado, A., García-Peñalvo, F. J., & Therón, R. (2023). Proof-of-concept of an information visualization classification approach based on their fine-grained features. Expert Systems, 40(1). https://doi.org/10.1111/exsy.12872