A Comparative Study of Parallel Programming Frameworks for Distributed GPU Applications


Abstract

Parallel programming frameworks such as MPI, OpenSHMEM, Charm++ and Legion have been widely used in many scientific domains (ranging from bioinformatics to computational physics and chemistry, among others) to implement distributed applications. While they serve the same purpose, these frameworks differ in programmability, performance, and scalability across different applications and cluster types. Hence, it is important for programmers to select the programming framework best suited to the characteristics of their application (i.e., its computation and communication patterns) and to the hardware setup of the target high-performance computing cluster. In this work, we consider several popular parallel programming frameworks for distributed applications. We first analyze their memory model, execution model, synchronization model and GPU support. We then compare their programmability, performance, scalability, and load-balancing capability on a homogeneous computing cluster equipped with GPUs.

Citation (APA)

Gu, R., & Becchi, M. (2019). A Comparative Study of Parallel Programming Frameworks for Distributed GPU Applications. In ACM International Conference on Computing Frontiers 2019, CF 2019 - Proceedings (pp. 268–273). Association for Computing Machinery, Inc. https://doi.org/10.1145/3310273.3323071
