Efficient parallel algorithms can be made robust


Abstract

The efficient parallel algorithms proposed for many fundamental problems, such as list ranking, computing preorder numberings and other functions on trees, or integer sorting, are very sensitive to processor failures. The requirement of efficiency (commonly formalized using Parallel-time × Processors as a cost measure) has led to highly tuned PRAM algorithms which, under the additional constraint of even simple processor failures, become inefficient or incorrect. We propose a new notion of robustness that combines efficiency with fault tolerance. For the common case of fail-stop errors, we develop a general and easy-to-implement technique for making many efficient parallel algorithms robust, including algorithms for all the problems listed above. More specifically, for any dynamic pattern of fail-stop errors with at least one surviving processor, our method increases the original algorithm's cost by at most a multiplicative factor polylogarithmic in the input size.
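The abstract states the result without the construction. As a hedged illustration of the fail-stop model it addresses, the sketch below simulates a toy Write-All-style computation in Python: p processors must mark all n cells, any processor may fail-stop at any moment, and as long as one processor survives the job still completes, at the price of redundant work. Everything here (the name robust_write_all, the synchronous round-based scheduler, the random failure probability) is an illustrative assumption, not the paper's actual technique, and this naive rescanning loop does not achieve the polylogarithmic overhead bound the paper proves.

```python
import random

def robust_write_all(n, p, fail_prob=0.1, seed=0):
    """Toy Write-All: p fail-stop-prone processors must set all n cells
    of `done` to True. Survivors keep rescanning for unfinished cells,
    so any dynamic failure pattern that leaves at least one processor
    alive still completes the job -- at the cost of redundant work."""
    rng = random.Random(seed)
    done = [False] * n
    alive = [True] * p
    work = 0
    while not all(done):
        # One synchronous round: every live processor claims one cell
        # that was unfinished at the start of the round (spread by id,
        # so duplicated writes are possible but harmless: idempotent).
        pending = [i for i, d in enumerate(done) if not d]
        for pid in range(p):
            if not alive[pid]:
                continue
            work += 1  # charge one unit of work per live processor step
            done[pending[pid % len(pending)]] = True
            # Adversarial fail-stop, except we never kill the last
            # processor (the paper's bound assumes >= 1 survivor).
            if sum(alive) > 1 and rng.random() < fail_prob:
                alive[pid] = False
    return work

if __name__ == "__main__":
    n, p = 1024, 32
    w = robust_write_all(n, p)
    # Fault-free cost is n writes; the ratio is the robustness overhead.
    print(f"n={n} p={p} work={w} overhead={w / n:.2f}x")
```

The design point the toy captures is the cost tradeoff the abstract describes: robustness comes from survivors re-detecting unfinished work, and the measure of interest is how far the total work exceeds the fault-free cost. The paper's contribution is a scheduling technique that bounds this overhead by a polylogarithmic factor for any fail-stop pattern with a survivor.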

Citation (APA)

Kanellakis, P. C., & Shvartsman, A. A. (1989). Efficient parallel algorithms can be made robust. In Proceedings of the Annual ACM Symposium on Principles of Distributed Computing (pp. 211–221). ACM. https://doi.org/10.1145/72981.72996
