Evaluating interactive data systems: Survey and case studies

Abstract

Interactive query interfaces have become a popular tool for ad hoc data analysis and exploration. Compared with traditional systems that are optimized for throughput or batched performance, these systems focus more on user-centric interactivity. This poses a new class of performance challenges to the backend, which are further exacerbated by the advent of new interaction modes (e.g., touch, gesture) and query interface paradigms (e.g., sliders, maps). There is, thus, a need to clearly articulate the evaluation space for interactive systems. In this paper, we extensively survey the literature to guide the development and evaluation of interactive data systems. We highlight unique characteristics of interactive workloads, discuss confounding factors when conducting user studies, and catalog popular metrics for evaluation. We further delineate certain behaviors not captured by these metrics and propose complementary ones to provide a complete picture of interactivity. We demonstrate how to analyze and employ user behavior for system enhancements through three case studies. Our survey and case studies motivate the need for behavior-driven evaluation and optimizations when building interactive interfaces.

Citation (APA)

Rahman, P., Jiang, L., & Nandi, A. (2020). Evaluating interactive data systems: Survey and case studies. The VLDB Journal, 29, 119–146. Springer. https://doi.org/10.1007/s00778-019-00589-2
