Many programming languages support either task parallelism or data parallelism, but few provide a uniform framework for writing applications that need both. We present a programming language and system that integrates task and data parallelism using shared objects. A shared object may be stored on one processor or replicated; it may also be partitioned and distributed over several processors. Task parallelism is achieved by forking processes remotely and having them communicate and synchronize through objects. Data parallelism is achieved by executing operations on partitioned objects in parallel. Writing task- and data-parallel applications with shared objects has several advantages. Programmers use the objects as if they were stored in a memory common to all processors. On distributed-memory machines, if objects are remote, replicated, or partitioned, the system takes care of many low-level details such as data transfers and consistency semantics. In this article, we show how to write task- and data-parallel programs with our shared object model. We also describe a portable implementation of the model. To assess the performance of the system, we wrote several applications that use both task and data parallelism and executed them on a collection of Pentium Pros connected by Myrinet. The performance of these applications is also discussed in this article.
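The two styles described in the abstract can be sketched in ordinary Python, with the caveat that this is purely illustrative: the paper's system targets distributed-memory machines with its own language, whereas this sketch runs on a shared-memory thread pool. The names `SharedCounter` and `parallel_map` are invented for the example, not taken from the paper.

```python
# Illustrative sketch only (assumed names, shared-memory threads, not the
# paper's distributed shared-object system).
from concurrent.futures import ThreadPoolExecutor
import threading

class SharedCounter:
    """A shared object: encapsulated state plus synchronized operations."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        # An operation on the shared object; synchronization is handled
        # inside the object, not by the caller.
        with self._lock:
            self._value += 1

    def value(self):
        with self._lock:
            return self._value

def parallel_map(fn, partitions, pool):
    """Data parallelism: apply fn to each partition of an object in parallel."""
    return list(pool.map(fn, partitions))

with ThreadPoolExecutor(max_workers=4) as pool:
    # Task parallelism: fork tasks that communicate through a shared object.
    counter = SharedCounter()
    tasks = [pool.submit(counter.increment) for _ in range(8)]
    for t in tasks:
        t.result()

    # Data parallelism: an operation applied to each partition in parallel.
    partitions = [[1, 2], [3, 4], [5, 6]]
    sums = parallel_map(sum, partitions, pool)

print(counter.value(), sums)  # 8 [3, 7, 11]
```

In the paper's model the same uniformity holds: both the forked processes and the partitioned-object operations go through the shared-object interface, and the runtime system decides where the data lives.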
Hassen, S. B., Bal, H. E., & Jacobs, C. J. H. (1998). Task- and data-parallel programming language based on shared objects. ACM Transactions on Programming Languages and Systems, 20(6), 1131–1170. https://doi.org/10.1145/295656.295658