Di_pSystem: A parallel programming system for distributed memory architectures

Abstract

We present the architecture of a parallel programming system, the di_pSystem. Our target machines are clusters of multiprocessors interconnected by very fast networks. The system aims to provide a programming style close to the shared memory programming model. This is achieved by a software layer, between the programmer and the operating system, that supports communication between individual computational agents and dynamically balances the workload in the system. This layer hides much of the complexity of programming distributed architectures from the programmer while remaining competitive in performance. Low-level communication in the di_pSystem is implemented using MPI as a backbone. Initial results indicate that the system is close in performance to MPI, a fact that we attribute to its ability to dynamically balance the workload of the computations.
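
The abstract does not show the di_pSystem's actual API, but the idea it describes, a layer over MPI that distributes work dynamically instead of statically, can be illustrated with a generic master/worker sketch. The code below is a minimal, hypothetical example in C with MPI (the task count, the compute() function, and the message tags are illustrative assumptions, not part of the di_pSystem): one process deals out tasks on demand, so faster workers automatically receive more work.

/*
 * Hypothetical sketch (not the di_pSystem API): a master/worker scheme over
 * MPI that hands out tasks on demand, illustrating the kind of dynamic
 * workload balancing that a layer above MPI can hide from the programmer.
 */
#include <mpi.h>
#include <stdio.h>

#define NUM_TASKS 64
#define TAG_WORK  1
#define TAG_STOP  2

/* Stand-in for an application-level computation on one task. */
static double compute(int task_id) {
    double acc = 0.0;
    for (int i = 0; i <= task_id * 1000; i++)
        acc += (double)i;
    return acc;
}

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                      /* master: deal tasks on demand */
        int next_task = 0, active = size - 1;
        while (active > 0) {
            double result;
            MPI_Status st;
            MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE,
                     MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (next_task < NUM_TASKS) {
                MPI_Send(&next_task, 1, MPI_INT, st.MPI_SOURCE,
                         TAG_WORK, MPI_COMM_WORLD);
                next_task++;
            } else {
                int stop = -1;
                MPI_Send(&stop, 1, MPI_INT, st.MPI_SOURCE,
                         TAG_STOP, MPI_COMM_WORLD);
                active--;
            }
        }
    } else {                              /* worker: request, compute, repeat */
        double result = 0.0;              /* first send only requests work    */
        for (;;) {
            int task;
            MPI_Status st;
            MPI_Send(&result, 1, MPI_DOUBLE, 0, TAG_WORK, MPI_COMM_WORLD);
            MPI_Recv(&task, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_STOP)
                break;
            result = compute(task);       /* slower workers simply ask less often */
        }
    }

    MPI_Finalize();
    return 0;
}

In such a scheme the programmer sees only task submission and results; the message passing and the balancing policy live in the intermediate layer, which is the property the abstract attributes to the di_pSystem.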

Citation (APA)

Silva, F., Paulino, H., & Lopes, L. (1999). Di_pSystem: A parallel programming system for distributed memory architectures. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1697, pp. 525–532). Springer Verlag. https://doi.org/10.1007/3-540-48158-3_65
