Cost-function complexity matters: When does parallel dynamic programming pay off for join-order optimization

Abstract

The execution time of a query can vary by several orders of magnitude depending on the join order, so efficient query execution requires determining optimal join orders. Dynamic programming determines optimal join orders efficiently. Unfortunately, its runtime depends on the characteristics of the query, limiting its applicability to simple optimization problems. To extend this applicability, different parallelization strategies have been proposed. Although existing parallelization strategies showed benefits for complex cost functions, the effect of cost-function complexity itself was not evaluated. Therefore, in this paper, we compare different sequential and parallel dynamic programming variants with respect to different query characteristics and cost-function complexities. We show that parallelizing dynamic programming most often pays off only for complex cost functions; for simple cost functions, sequential variants are most often superior to their parallel counterparts.
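To make the optimization problem concrete, the following is a minimal sketch of subset-based dynamic programming for join ordering (in the spirit of DPsub), not the paper's actual implementation. The relation names, cardinalities, selectivities, and the sum-of-intermediate-cardinalities cost model are all illustrative assumptions; real optimizers plug in far more complex cost functions, which is exactly the dimension the paper varies.

```python
from itertools import combinations

# Hypothetical cardinalities and join selectivities (illustrative only).
CARD = {"A": 1000, "B": 200, "C": 50}
SEL = {frozenset("AB"): 0.01, frozenset("BC"): 0.05}

def join_card(rels):
    """Estimate the cardinality of joining a set of relations."""
    card = 1.0
    for r in rels:
        card *= CARD[r]
    for a, b in combinations(sorted(rels), 2):
        card *= SEL.get(frozenset((a, b)), 1.0)
    return card

def dp_optimal_join(relations):
    """Enumerate every subset of relations in increasing size and keep
    the cheapest plan per subset (classic dynamic programming)."""
    rels = sorted(relations)
    best = {frozenset([r]): (0.0, r) for r in rels}  # subset -> (cost, plan)
    for size in range(2, len(rels) + 1):
        for subset in combinations(rels, size):
            s = frozenset(subset)
            # Try every split of s into two non-empty halves.
            for k in range(1, size):
                for left in combinations(subset, k):
                    l, r = frozenset(left), s - frozenset(left)
                    # Simple cost model: sum of intermediate-result sizes.
                    cost = best[l][0] + best[r][0] + join_card(s)
                    if s not in best or cost < best[s][0]:
                        best[s] = (cost, (best[l][1], best[r][1]))
    return best[frozenset(rels)]

# Example: joining B with C first is cheapest under these assumptions.
cost, plan = dp_optimal_join(["A", "B", "C"])
```

Each subset is evaluated exactly once, but the number of subsets grows exponentially with the number of relations, and each evaluation invokes the cost function on every split; this is why both query shape and cost-function complexity drive the runtime, and why parallelizing the per-subset work only pays off when each cost-function call is expensive enough to amortize the coordination overhead.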

Citation (APA)

Meister, A., & Saake, G. (2017). Cost-function complexity matters: When does parallel dynamic programming pay off for join-order optimization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10509 LNCS, pp. 297–310). Springer Verlag. https://doi.org/10.1007/978-3-319-66917-5_20
