Abstract
Simulation is a widely adopted method to analyze and predict the performance of large-scale parallel applications. Validating the hardware model is highly important for complex simulations with a large number of parameters. Common practice involves calculating the percent error between the projected and the real execution time of a benchmark program. However, in a high-dimensional parameter space, this coarse-grained approach often suffers from parameter insensitivity, which may not be known a priori. Moreover, the traditional approach cannot be applied to the validation of software models, such as application skeletons used in online simulations. In this work, we present a methodology and a toolset for validating both hardware and software models by quantitatively comparing fine-grained statistical characteristics obtained from execution traces. Although statistical information has been used in tasks like performance optimization, this is the first attempt to apply it to simulation validation. Our experimental results show that the proposed evaluation approach offers significant improvement in fidelity when compared to evaluation using total execution time, and the proposed metrics serve as reliable criteria for progressing toward an automated simulation tuning process.
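To make the contrast concrete, the following is a minimal Python sketch of the two kinds of checks the abstract contrasts: a coarse percent-error comparison of total execution time versus a distributional comparison of a per-event quantity taken from execution traces. The per-message latencies, the synthetic distributions, and the choice of the Kolmogorov-Smirnov statistic are illustrative assumptions, not the statistical characteristics actually defined in the paper.

import numpy as np
from scipy import stats

def percent_error(simulated_total, measured_total):
    # Coarse-grained validation: percent error of total execution time.
    return abs(simulated_total - measured_total) / measured_total * 100.0

def ks_distance(simulated_samples, measured_samples):
    # Fine-grained validation: Kolmogorov-Smirnov distance between the
    # distributions of a per-event quantity (here, hypothetical message
    # latencies) extracted from the simulated and the real traces.
    return stats.ks_2samp(simulated_samples, measured_samples).statistic

# Synthetic per-message latencies (microseconds); same mean, different shape.
rng = np.random.default_rng(seed=0)
real_latencies = rng.gamma(shape=4.0, scale=2.5, size=10_000)   # mean ~10
sim_latencies = rng.exponential(scale=10.0, size=10_000)        # mean ~10

# The totals nearly agree, so the coarse metric looks good even though the
# event-level behavior of the model differs, which is the kind of
# parameter insensitivity the abstract points out.
print(f"percent error of total time: "
      f"{percent_error(sim_latencies.sum(), real_latencies.sum()):.2f}%")
print(f"KS distance of latency distributions: "
      f"{ks_distance(sim_latencies, real_latencies):.3f}")

In a real validation workflow the per-event quantities would come from instrumented traces of the benchmark run and of the simulator rather than from synthetic distributions.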
Citation
Zhang, D., Wilke, J., Hendry, G., & Dechev, D. (2016). Validating the simulation of large-scale parallel applications using statistical characteristics. ACM Transactions on Modeling and Performance Evaluation of Computing Systems, 1(1). https://doi.org/10.1145/2809778