Benchmarking exploratory OLAP

Abstract

Supporting interactive database exploration (IDE) is a problem that has attracted considerable attention in recent years. Exploratory OLAP (On-Line Analytical Processing) is an important use case, where tools support the navigation and analysis of the most interesting data using the best possible perspectives. While many approaches have been proposed (such as query recommendation, reuse, steering, personalization, or unexpected data recommendation), a recurring problem is how to assess the effectiveness of an exploratory OLAP approach. In this paper, we propose a benchmark framework to do so, relying on an extensible set of user-centric metrics that relate to the main dimensions of exploratory analysis. Specifically, we describe how to model and simulate user activity, how to formalize our metrics, and how to build exploratory tasks to properly evaluate an IDE system under test (SUT). To the best of our knowledge, this is the first proposal of such a benchmark. Experiments are twofold: first, we evaluate the benchmark protocol and metrics on synthetic SUTs whose behavior is well known; second, we use the benchmark to evaluate and compare two recent SUTs from the IDE literature. Finally, potential extensions to produce an industry-strength benchmark are listed in the conclusion.
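
The paper itself is not reproduced here, but the protocol the abstract outlines (a simulated user drives an exploration session through the SUT, and user-centric metrics score the resulting session) can be illustrated with a minimal sketch. Everything below is a hypothetical illustration, not the authors' framework: SystemUnderTest, Query, Session, run_task, and the example metric are invented names, and the simulated user is reduced to one who simply follows the SUT's suggestions for a fixed query budget.

```python
# Hypothetical sketch of the benchmark loop described in the abstract.
# SystemUnderTest, Query, Session, and the metric below are illustrative
# placeholders, not the actual framework from the paper.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Query:
    """An OLAP query issued during an exploration session."""
    text: str


@dataclass
class Session:
    """The sequence of queries produced while solving one exploratory task."""
    queries: List[Query] = field(default_factory=list)


class SystemUnderTest:
    """Wraps an IDE system: given the session so far, suggests the next query."""
    def suggest(self, session: Session) -> Query:
        raise NotImplementedError


def run_task(sut: SystemUnderTest, budget: int) -> Session:
    """Simulate a user who follows the SUT's suggestions for a fixed budget."""
    session = Session()
    for _ in range(budget):
        session.queries.append(sut.suggest(session))
    return session


def evaluate(sut: SystemUnderTest, metrics: dict, budget: int = 20) -> dict:
    """Run one exploratory task and score the session with each metric."""
    session = run_task(sut, budget)
    return {name: metric(session) for name, metric in metrics.items()}


# Example: a trivial session-length metric and a dummy SUT.
if __name__ == "__main__":
    class DummySUT(SystemUnderTest):
        def suggest(self, session: Session) -> Query:
            return Query(f"SELECT ... -- step {len(session.queries)}")

    metrics = {"num_queries": lambda s: len(s.queries)}
    print(evaluate(DummySUT(), metrics))  # {'num_queries': 20}
```

In this reading, comparing two SUTs amounts to running the same exploratory tasks against each and comparing the resulting metric dictionaries; the actual metrics in the paper are user-centric and tied to the dimensions of exploratory analysis rather than the toy count used above.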

Citation (APA)

Djedaini, M., Furtado, P., Labroche, N., Marcel, P., & Peralta, V. (2017). Benchmarking exploratory OLAP. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10080 LNCS, pp. 61–77). Springer Verlag. https://doi.org/10.1007/978-3-319-54334-5_5
