On the Correctness of Orphan Management Algorithms


Abstract

In a distributed system, node failures, network delays, and other unpredictable occurrences can result in orphan computations—subcomputations that continue to run but whose results are no longer needed. Several algorithms have been proposed to prevent such computations from seeing inconsistent states of the shared data. In this paper, two such orphan management algorithms are analyzed. The first is an algorithm implemented in the Argus distributed-computing system at MIT, and the second is an algorithm proposed at Carnegie-Mellon. The algorithms are described formally, and complete proofs of their correctness are given. The proofs show that the fundamental concepts underlying the two algorithms are very similar in that each can be regarded as an implementation of the same high-level algorithm. By exploiting properties of information flow within transaction management systems, the algorithms ensure that orphans only see states of the shared data that they could also see if they were not orphans. When the algorithms are used in combination with any correct concurrency control algorithm, they guarantee that all computations, orphan as well as nonorphan, see consistent states of the shared data. © 1992, ACM. All rights reserved.
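The abstract's key idea—propagating knowledge of aborts through the system so that orphans are destroyed before they can observe inconsistent state—can be illustrated with a minimal sketch. This is not the authors' formal model; it is a hedged illustration of the Argus-style mechanism in which every message piggybacks the sender's knowledge of aborted transactions, and a receiving node eliminates dependent local orphans before processing the message. All class and method names here are illustrative assumptions.

```python
# Illustrative sketch (not the paper's formal algorithm) of abort-knowledge
# propagation for orphan management: abort information flows with every
# message, and orphans are killed before a message's effects become visible.

class Node:
    def __init__(self, name):
        self.name = name
        self.known_aborts = set()   # transaction ids this node knows are aborted
        self.running = {}           # local tx id -> set of its ancestor tx ids

    def start(self, tx, ancestors=()):
        # register a local subcomputation together with its ancestry
        self.running[tx] = set(ancestors) | {tx}

    def abort(self, tx):
        # learn of an abort locally; dependent computations become orphans
        self.known_aborts.add(tx)
        self._kill_orphans()

    def send(self, payload):
        # piggyback this node's abort knowledge on every outgoing message
        return {"payload": payload, "aborts": set(self.known_aborts)}

    def receive(self, msg):
        # merge the sender's abort knowledge, then destroy local orphans
        # BEFORE the message's contents are acted upon
        self.known_aborts |= msg["aborts"]
        self._kill_orphans()
        return msg["payload"]

    def _kill_orphans(self):
        # a computation is an orphan if any of its ancestors has aborted
        self.running = {tx: anc for tx, anc in self.running.items()
                        if not (anc & self.known_aborts)}
```

For example, if node A aborts transaction `t1` and later sends a message to node B, a subcomputation `t1.1` running at B on behalf of `t1` is destroyed when the message arrives, before it can see any state inconsistent with the abort:

```python
a, b = Node("A"), Node("B")
b.start("t1.1", ancestors=["t1"])
a.abort("t1")
b.receive(a.send("data"))
assert "t1.1" not in b.running  # the orphan was killed on receipt
```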

Citation (APA)

Herlihy, M., Lynch, N., Merritt, M., & Weihl, W. (1992). On the Correctness of Orphan Management Algorithms. Journal of the ACM (JACM), 39(4), 881–930. https://doi.org/10.1145/146585.146616
