Efficient experiment selection in automated software performance evaluations

Abstract

The performance of today's enterprise applications is influenced by a variety of parameters across different layers. Evaluating the performance of such systems is therefore a time- and resource-consuming process: the number of possible parameter combinations and configurations requires many experiments before meaningful conclusions can be drawn. Although many tools for automated performance testing are available, controlling experiments and analyzing results still requires substantial manual effort. In this paper, we apply statistical model inference techniques, namely Kriging and MARS, to adaptively select experiments. Our approach automatically selects and conducts experiments based on the accuracy observed for the models inferred from the currently available data. We validated the approach using an industrial ERP scenario. The results demonstrate that we can automatically infer a prediction model with a mean relative error of 1.6% using only 18% of the measurement points in the configuration space. © 2011 Springer-Verlag.
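The core loop the abstract describes, selecting the next experiment wherever the inferred model is least accurate and stopping once the observed error is acceptable, can be sketched as follows. This is not the authors' algorithm: it substitutes piecewise-linear interpolation for Kriging/MARS, uses a single hypothetical configuration parameter, and the `measure` function is a toy stand-in for a real measurement of the system under test.

```python
def measure(x):
    # Hypothetical "system under test": response time as a function of one
    # configuration parameter (stand-in for an actual performance experiment).
    return 100.0 + 50.0 * x * x

def predict(model, x):
    # Simple surrogate: piecewise-linear interpolation over measured points,
    # playing the role of the Kriging/MARS models in the paper.
    pts = sorted(model.items())
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError("x outside measured range")

def adaptive_selection(lo, hi, rel_err_target=0.02, max_experiments=20):
    # Start with the domain boundaries; then repeatedly run the experiment in
    # the region where the current model is least trusted (here: the interval
    # with the largest change in measured response). Stop once the model's
    # relative error at a freshly measured point drops below the target,
    # mirroring the accuracy-driven stopping criterion in the abstract.
    model = {lo: measure(lo), hi: measure(hi)}
    while len(model) < max_experiments:
        pts = sorted(model)
        a, b = max(zip(pts, pts[1:]),
                   key=lambda ab: abs(model[ab[1]] - model[ab[0]]))
        mid = (a + b) / 2.0
        predicted = predict(model, mid)  # model's guess before measuring
        actual = measure(mid)            # conduct the experiment
        model[mid] = actual
        if abs(predicted - actual) / actual < rel_err_target:
            break
    return model
```

Under these toy assumptions the loop converges after a handful of measurements instead of sweeping the full configuration space, which is the effect the paper quantifies (1.6% mean relative error from 18% of the measurement points).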

Citation (APA)

Westermann, D., Krebs, R., & Happe, J. (2011). Efficient experiment selection in automated software performance evaluations. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6977 LNCS, pp. 325–339). https://doi.org/10.1007/978-3-642-24749-1_24
