Message-optimal and latency-optimal termination detection algorithms for arbitrary topologies


Abstract

Detecting termination of a distributed computation is a fundamental problem in distributed systems. We present two optimal algorithms for detecting termination of a non-diffusing distributed computation for an arbitrary topology. Both algorithms are optimal in terms of message complexity and detection latency. The first termination detection algorithm has to be initiated along with the underlying computation. Its message complexity is Θ(N + M) and its detection latency is Θ(D), where N is the number of processes in the system, M is the number of application messages exchanged by the underlying computation, and D is the diameter of the communication topology. The second termination detection algorithm can be initiated at any time after the underlying computation has started. Its message complexity is Θ(E + M) and its detection latency is Θ(D), where E is the number of channels in the communication topology.

Keywords: termination detection, quiescence detection, optimal message complexity, optimal detection latency

© Springer-Verlag 2004.
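
In the model the paper assumes, the underlying computation has terminated once every process is passive and no application message is in transit. The paper's own algorithms are not reproduced here; as a minimal, self-contained illustration of what a termination detector must establish, the Python sketch below simulates the classic credit (weight-throwing) scheme, in which activity carries fractions of a unit of credit and the detector announces termination exactly when the full credit has been returned. All names and parameters in the sketch are hypothetical.

# A minimal simulation of the termination condition addressed by the paper:
# the computation has terminated when every process is passive and no
# application message is in transit. The detection scheme used here is the
# classic credit (weight-throwing) method, NOT the paper's algorithms,
# which achieve Theta(N + M) messages and Theta(D) detection latency.

from fractions import Fraction
import random

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.active = False
        self.credit = Fraction(0)

def simulate(num_procs=5, seed=42):
    rng = random.Random(seed)
    procs = [Process(i) for i in range(num_procs)]
    in_transit = []          # (dest, credit) pairs: messages in flight
    collected = Fraction(0)  # credit returned to the detector

    # The detector starts the computation at process 0 with credit 1.
    procs[0].active = True
    procs[0].credit = Fraction(1)

    while True:
        # Termination is detected exactly when all credit has returned.
        if collected == 1:
            assert not in_transit and all(not p.active for p in procs)
            return "terminated"

        # Deliver one in-transit message: the receiver becomes active
        # and absorbs the credit carried by the message.
        if in_transit and rng.random() < 0.5:
            dest, credit = in_transit.pop(rng.randrange(len(in_transit)))
            procs[dest].active = True
            procs[dest].credit += credit
            continue

        active = [p for p in procs if p.active]
        if not active:
            continue
        p = rng.choice(active)
        if rng.random() < 0.6:
            # Send an application message carrying half of p's credit.
            half = p.credit / 2
            p.credit -= half
            in_transit.append((rng.randrange(num_procs), half))
        else:
            # p becomes passive and returns its credit to the detector.
            collected += p.credit
            p.credit = Fraction(0)
            p.active = False

if __name__ == "__main__":
    print(simulate())

Running the script prints "terminated" once the simulated detector has collected the entire unit of credit, at which point the assertion confirms that no process is active and no message is in transit. Note that this credit scheme does not match the paper's message bound; it merely makes the detection condition concrete.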

Citation (APA)

Mittal, N., Venkatesan, S., & Peri, S. (2004). Message-optimal and latency-optimal termination detection algorithms for arbitrary topologies. Lecture Notes in Computer Science, 3274, 290–304. https://doi.org/10.1007/978-3-540-30186-8_21
