Trusting floating point benchmarks - are your benchmarks really data independent?

Abstract

Benchmarks are important tools for studying increasingly complex hardware architectures and software systems. Two seemingly common assumptions are that the execution time of floating point operations does not change much with different input values, and that the execution time of a benchmark does not vary much if the input and computed values do not influence any branches. These assumptions do not always hold. Handling denormalized floating point values (a representation automatically used by the CPU for values close to zero) on-chip incurs significant overhead on modern Intel hardware, even though the program can continue uninterrupted. We have observed that even a small fraction of denormal numbers in a textbook benchmark significantly increases its execution time, leading to wrong conclusions about the relative efficiency of different hardware architectures and about scalability problems of a cluster benchmark. © Springer-Verlag Berlin Heidelberg 2007.
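The effect described in the abstract can be probed with a minimal sketch like the one below. It is not the paper's benchmark; it simply times the same arithmetic loop seeded once with a normal double and once with a subnormal (denormal) one. The loop body, iteration count, and the `bench` helper are illustrative choices, and in interpreted Python the interpreter overhead may mask the hardware-level penalty that a compiled C version of the same loop would expose.

```python
import sys
import time

# sys.float_info.min is the smallest *normalized* positive double
# (about 2.2e-308); positive values below it are stored as denormals.
NORMAL = 1.0
DENORMAL = sys.float_info.min / 2  # a subnormal (denormal) double

def bench(seed, n=1_000_000):
    """Time n multiply-add operations whose operands stay at seed's magnitude."""
    acc = seed
    t0 = time.perf_counter()
    for _ in range(n):
        acc = acc * 0.5 + seed  # acc remains denormal when seed is denormal
    elapsed = time.perf_counter() - t0
    return elapsed, acc

t_norm, _ = bench(NORMAL)
t_den, _ = bench(DENORMAL)
print(f"normal seed:   {t_norm:.3f} s")
print(f"denormal seed: {t_den:.3f} s")
```

On hardware that takes a slow microcode path for denormal operands, the second timing can be noticeably larger even though both loops execute identical instructions with no data-dependent branches, which is exactly the assumption the paper challenges.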

Cite

Bjørndalen, J. M., & Anshus, O. J. (2007). Trusting floating point benchmarks - are your benchmarks really data independent? In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4699 LNCS, pp. 178–188). Springer Verlag. https://doi.org/10.1007/978-3-540-75755-9_23
