Controlling distributed shared memory consistency from high level programming languages

Abstract

One of the keys to the success of parallel processing is the availability of high-level programming languages for off-the-shelf parallel architectures. Explicit message-passing models allow efficient execution. However, programming directly on these execution models does not deliver all the benefits of high-level programming in terms of software productivity or portability. HPF avoids the need for explicit message passing but still suffers from low performance when data accesses cannot be predicted with enough precision at compile time. OpenMP is defined on a shared-memory model. The use of a distributed shared memory (DSM) has been shown to facilitate high-level programming in terms of productivity and debugging, but the cost of keeping the distributed memories consistent limits performance. In this paper, we show that it is possible to control the consistency constraints on a DSM from compile-time analysis of the programs and thus to increase the efficiency of this execution model. © 2000 Springer-Verlag Berlin Heidelberg.
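
To make the idea concrete, the following sketch in C illustrates the kind of optimization the abstract describes. The dsm_acquire_range / dsm_release_range / dsm_barrier interface is entirely hypothetical (stubbed out here so the code compiles) and is not the runtime presented in the paper; the point is only that compile-time knowledge of a loop's access pattern can restrict consistency traffic to the data another node will actually read, instead of enforcing full consistency over the whole shared array.

/* Minimal sketch, assuming a hypothetical DSM runtime interface.
 * A real runtime would pull remote updates / publish local writes here;
 * the stubs only make this example self-contained and compilable. */
#include <stddef.h>

static void dsm_acquire_range(double *addr, size_t count) { (void)addr; (void)count; }
static void dsm_release_range(double *addr, size_t count) { (void)addr; (void)count; }
static void dsm_barrier(void) { }

/* One Jacobi relaxation step on the interior block [lo, hi) of a shared
 * array A (0 < lo < hi).  Compile-time analysis of the loop shows that this
 * node reads only A[lo-1 .. hi] and writes only Anew[lo .. hi-1].  The
 * interior of A is already up to date locally from the previous step, so
 * consistency actions can be limited to the one-element halo. */
void relax_block(double *A, double *Anew, size_t lo, size_t hi)
{
    /* Fetch just the halo elements written by the neighbouring nodes. */
    dsm_acquire_range(&A[lo - 1], 1);
    dsm_acquire_range(&A[hi], 1);

    for (size_t i = lo; i < hi; i++)
        Anew[i] = 0.5 * (A[i - 1] + A[i + 1]);

    /* Publish only the boundary values the neighbours will read next. */
    dsm_release_range(&Anew[lo], 1);
    dsm_release_range(&Anew[hi - 1], 1);

    dsm_barrier();  /* neighbours may now acquire the released values */
}

Without such analysis, a DSM must conservatively make every shared page consistent at each synchronization point, which is the overhead the paper aims to avoid.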

Citation (APA)

Jégou, Y. (2000). Controlling distributed shared memory consistency from high level programming languages. In Lecture Notes in Computer Science (Vol. 1800, pp. 293–300). Springer. https://doi.org/10.1007/3-540-45591-4_39
