An empirical investigation of the effort of creating reusable, component-based models for performance prediction

Abstract

Model-based performance prediction methods aim to evaluate the expected response time, throughput, and resource utilisation of a software system at design time, before implementation. Existing performance prediction methods use either monolithic, throw-away prediction models or component-based, reusable prediction models. While it is intuitively clear that developing reusable models requires more effort, this additional effort has not yet been quantified or analysed systematically. To study it, we conducted a controlled experiment with 19 computer science students who predicted the performance of two example systems using an established, monolithic method (Software Performance Engineering, SPE) as well as our own component-based method (Palladio). The results show that the effort of model creation with Palladio is approximately 1.25 times higher than with SPE in our experimental setting, with the resulting models having comparable prediction accuracy. Therefore, in some cases the creation of reusable prediction models can already be justified if they are reused at least once. © 2008 Springer.
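
A back-of-the-envelope sketch of the break-even argument behind the last sentence (not taken from the paper; the reuse cost fraction r is an assumption introduced here for illustration):

```latex
% E = effort of building one monolithic, throw-away model (normalised to 1).
% The experiment reports roughly 1.25E for the reusable Palladio model.
% r = assumed fraction of E needed to adapt the reusable model in a second prediction.
\[
  \underbrace{1.25\,E + r\,E}_{\text{reusable model, reused once}}
  \;\le\;
  \underbrace{2\,E}_{\text{two throw-away models}}
  \quad\Longleftrightarrow\quad
  r \;\le\; 0.75 .
\]
% Under these assumptions, a single reuse already pays off whenever adapting the
% reusable model takes less than 75% of the effort of building a fresh monolithic model.
```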

Citation (APA)

Martens, A., Becker, S., Koziolek, H., & Reussner, R. (2008). An empirical investigation of the effort of creating reusable, component-based models for performance prediction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5282 LNCS, pp. 16–31). Springer Verlag. https://doi.org/10.1007/978-3-540-87891-9_2
