The mOSAIC benchmarking framework: Development and execution of custom cloud benchmarks

Abstract

A natural consequence of the pay-per-use business model of Cloud Computing is that cloud users need to evaluate and compare different cloud providers in order to choose the offering with the best trade-off between performance and cost. At the state of the art, however, cloud providers give no real guarantees about the quality of the resources they offer, and no clear way exists to compare two different offerings. Moreover, the high elasticity of cloud resources (virtual machines can be added or removed in a few minutes) makes the evaluation of such systems a hard task. In this paper we propose to build ad hoc benchmark applications, whose behavior is strictly related to user needs and which can be used to compare different providers. The proposal is based on the use of the mOSAIC framework, which offers a deployable platform and an API for building provider-independent applications. Thanks to this independence, we are able to compare multiple cloud offerings directly. The paper details the proposed approach and the framework architecture implemented to apply it. Simple case studies illustrate how the framework works in practice. Moreover, the paper presents a detailed analysis of the state of the art and of the problem of benchmarking in cloud environments. © 2013 SCPE.
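
To make the idea of a provider-independent custom benchmark concrete, the sketch below is a minimal, hypothetical illustration only; it does not reproduce the actual mOSAIC API. The CloudOperation interface and measureMeanLatencyMs method are invented names standing in for an abstract cloud operation whose provider-specific binding would be supplied by a platform such as mOSAIC, so the same measurement code could run unchanged against different providers.

    // Hypothetical sketch (not the mOSAIC API): a provider-independent
    // benchmark task that measures the mean latency of an abstract cloud
    // operation, so the same code can be reused across providers.
    public class CustomBenchmark {

        /** Abstraction of a cloud operation; the concrete provider binding is injected. */
        interface CloudOperation {
            void execute() throws Exception;
        }

        /** Runs the operation 'iterations' times and returns the mean latency in milliseconds. */
        static double measureMeanLatencyMs(CloudOperation op, int iterations) throws Exception {
            long totalNanos = 0;
            for (int i = 0; i < iterations; i++) {
                long start = System.nanoTime();
                op.execute();
                totalNanos += System.nanoTime() - start;
            }
            return totalNanos / (iterations * 1_000_000.0);
        }

        public static void main(String[] args) throws Exception {
            // Placeholder workload standing in for a provider-specific operation
            // (e.g., a message exchange or a storage access obtained through the platform).
            CloudOperation simulatedOp = () -> Thread.sleep(5);
            System.out.printf("Mean latency: %.2f ms over 20 runs%n",
                    measureMeanLatencyMs(simulatedOp, 20));
        }
    }

In this sketch the benchmark logic depends only on the abstract operation, which is the property the paper attributes to mOSAIC-based applications: swapping the provider changes the binding, not the benchmark.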

Cite

APA

Aversano, G., Rak, M., & Villano, U. (2013). The mosaic benchmarking framework: Development and execution of custom cloud benchmarks. Scalable Computing, 14(1), 33–45. https://doi.org/10.12694/scpe.v14i1.825
