Using SIMD registers and instructions to enable instruction-level parallelism in sorting algorithms


Abstract

Most contemporary processors offer some version of Single Instruction Multiple Data (SIMD) machinery - vector registers and instructions to manipulate data stored in such registers. The central idea of this paper is to use these SIMD resources to improve the performance of the tail of recursive sorting algorithms. When the number of elements to be sorted falls below a set threshold, the data is loaded into the vector registers, manipulated in-register, and the result stored back to memory. Three implementations of sorting with two different SIMD machineries - x86-64's SSE2 and the G5's AltiVec - demonstrate that this idea delivers significant speed improvements. The improvements are orthogonal to the gains obtained through empirical search for a suitable sorting algorithm [11]. When integrated with the Dynamically Tuned Sorting Library (DTSL), this new code generation strategy reduces the time spent by DTSL by up to 22% for moderately sized arrays, with greater relative reductions for small arrays. Wall-clock performance of d-heaps is improved by up to 39% using a similar technique. Copyright 2007 ACM.
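To make the in-register phase concrete, the sketch below is a hedged illustration only, not code from the paper: it assumes a hypothetical helper named sort4x4_sse operating on a 16-element block of single-precision floats, and it uses plain SSE min/max and transpose intrinsics rather than the paper's SSE2/AltiVec kernels. Four registers are treated as rows of a 4x4 matrix, a 4-element sorting network is evaluated column-wise with vector min/max (so four compare-exchange sequences run in parallel), and a transpose makes each sorted column contiguous in memory. A complete base case would still need to combine the resulting sorted runs; that step is omitted here.

    #include <xmmintrin.h>  /* SSE: _mm_min_ps, _mm_max_ps, _MM_TRANSPOSE4_PS */

    /* Hypothetical sketch: sort each group of four floats, a[4i..4i+3],
       of a 16-element block in ascending order, entirely in-register. */
    static void sort4x4_sse(float *a)
    {
        __m128 r0 = _mm_loadu_ps(a +  0);
        __m128 r1 = _mm_loadu_ps(a +  4);
        __m128 r2 = _mm_loadu_ps(a +  8);
        __m128 r3 = _mm_loadu_ps(a + 12);
        __m128 lo, hi;

        /* 5-comparator sorting network for 4 elements:
           (0,1) (2,3), (0,2) (1,3), (1,2).  Each min/max pair performs
           the compare-exchange on all four lanes at once. */
        lo = _mm_min_ps(r0, r1); hi = _mm_max_ps(r0, r1); r0 = lo; r1 = hi;
        lo = _mm_min_ps(r2, r3); hi = _mm_max_ps(r2, r3); r2 = lo; r3 = hi;
        lo = _mm_min_ps(r0, r2); hi = _mm_max_ps(r0, r2); r0 = lo; r2 = hi;
        lo = _mm_min_ps(r1, r3); hi = _mm_max_ps(r1, r3); r1 = lo; r3 = hi;
        lo = _mm_min_ps(r1, r2); hi = _mm_max_ps(r1, r2); r1 = lo; r2 = hi;

        /* Every column of (r0, r1, r2, r3) is now sorted top to bottom;
           transpose so each sorted column becomes a contiguous row. */
        _MM_TRANSPOSE4_PS(r0, r1, r2, r3);
        _mm_storeu_ps(a +  0, r0);
        _mm_storeu_ps(a +  4, r1);
        _mm_storeu_ps(a +  8, r2);
        _mm_storeu_ps(a + 12, r3);
    }

In a recursive sort, such a routine would be invoked once a partition shrinks below the threshold, replacing the scalar insertion-sort base case with branch-free in-register work.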

CITATION STYLE

APA

Furtak, T., Amaral, J. N., & Niewiadomski, R. (2007). Using SIMD registers and instructions to enable instruction-level parallelism in sorting algorithms. In Annual ACM Symposium on Parallelism in Algorithms and Architectures (pp. 348–357). https://doi.org/10.1145/1248377.1248436
