We evaluate the impact of programming language features on the performance of parallel applications on modern parallel architectures, particularly for the demanding case of sparse integer codes. We compare a number of parallel programming models (Pthreads, OpenMP, MPI, UPC) on both shared-memory and distributed-memory architectures. We find that language features can make parallel programs easier to write, but cannot hide the underlying communication costs of the target parallel architecture. Powerful compiler analysis and optimization can help reduce software overhead, but features such as fine-grain remote accesses are inherently expensive on clusters. To avoid large reductions in performance, language features must avoid degrading the performance of local computations. © Springer-Verlag 2004.
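The claim that fine-grain remote accesses are inherently expensive on clusters can be illustrated with a simple latency-bandwidth (alpha-beta) communication cost model. The sketch below is not from the paper; the constants and function names are illustrative assumptions chosen to show why issuing one message per element is far costlier than one aggregated bulk transfer.

```python
# Illustrative alpha-beta cost model (assumed constants, not measured data):
# each message pays a fixed latency ALPHA plus BETA seconds per byte.
ALPHA = 10e-6   # assumed per-message latency (~10 microseconds on a cluster)
BETA = 1e-9     # assumed per-byte transfer time (~1 GB/s link)

def transfer_cost(num_messages: int, total_bytes: int) -> float:
    """Time to move total_bytes split across num_messages messages."""
    return num_messages * ALPHA + total_bytes * BETA

n = 100_000                              # 100k eight-byte elements
fine_grain = transfer_cost(n, 8 * n)     # one remote access per element
bulk = transfer_cost(1, 8 * n)           # one aggregated bulk transfer

print(f"fine-grain: {fine_grain:.4f} s")
print(f"bulk:       {bulk:.6f} s")
print(f"slowdown:   {fine_grain / bulk:.0f}x")
```

Under these assumed parameters the per-message latency term dominates, so the fine-grain version is orders of magnitude slower even though both move the same number of bytes, matching the paper's observation that compiler optimization alone cannot hide this cost.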
Berlin, K., Huan, J., Jacob, M., Kochhar, G., Prins, J., Pugh, B., … Tseng, C. W. (2004). Evaluating the impact of programming language features on the performance of parallel applications on cluster architectures. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2958, 194–208. https://doi.org/10.1007/978-3-540-24644-2_13