Compiler-controlled parallelism-independent scheduling for parallel and distributed system


Abstract

The objective of parallelism-independent (PI) scheduling is to minimize the completion time of a parallel application for any number of processing elements in the computing system. We propose several parallelism-independent algorithms that are applicable either to distributed computing systems, i.e., systems of autonomous processors connected via communication links (in which case we provide explicit scheduling of message communication), or to tightly coupled multiprocessor systems and to architectures exploiting instruction-level parallelism. The algorithms are hybrid but carried out predominantly at compile time in order to reduce dynamic overhead and scheduling hardware. Traditional static scheduling algorithms produce machine code with a fixed degree of parallelism, which cannot be executed efficiently on computer systems with different degrees of parallelism. Our algorithms eliminate this problem, which is closely related to the distribution of parallel programs.
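
The abstract describes the goal only at a high level; as a rough illustration of what "scheduling for any number of processing elements" can mean in practice, the sketch below applies a generic earliest-finish-time list-scheduling heuristic (not the authors' PI algorithms) to the same hypothetical task graph for several machine sizes. The task names, costs, and helper functions are invented for this example.

# Hypothetical sketch: greedy list scheduling of a task DAG onto p identical
# processors. This is a generic heuristic used here only to illustrate the idea
# of producing a schedule for an arbitrary number of processing elements; it is
# not the algorithm proposed in the paper.

def list_schedule(tasks, deps, cost, num_procs):
    """Earliest-finish-time list scheduling.

    tasks: iterable of task ids
    deps:  dict mapping task -> set of predecessor tasks
    cost:  dict mapping task -> execution time
    Returns (placement, makespan), where placement maps task -> (processor, start).
    """
    finish = {}                      # task -> finish time
    proc_free = [0.0] * num_procs    # next free time of each processor
    placement = {}
    remaining = set(tasks)
    while remaining:
        # tasks whose predecessors have all been scheduled
        ready = [t for t in remaining if deps.get(t, set()).issubset(finish)]
        # simple priority: largest cost first (a stand-in for a real rank function)
        ready.sort(key=lambda t: -cost[t])
        t = ready[0]
        earliest = max((finish[p] for p in deps.get(t, set())), default=0.0)
        # pick the processor on which the task finishes earliest
        proc = min(range(num_procs),
                   key=lambda i: max(proc_free[i], earliest) + cost[t])
        start = max(proc_free[proc], earliest)
        finish[t] = start + cost[t]
        proc_free[proc] = finish[t]
        placement[t] = (proc, start)
        remaining.remove(t)
    return placement, max(finish.values())

if __name__ == "__main__":
    tasks = ["a", "b", "c", "d"]
    deps = {"c": {"a", "b"}, "d": {"c"}}
    cost = {"a": 2, "b": 3, "c": 1, "d": 2}
    # The same task graph scheduled for different machine sizes, illustrating
    # the "any number of processing elements" goal from the abstract.
    for p in (1, 2, 4):
        _, makespan = list_schedule(tasks, deps, cost, p)
        print(f"{p} PE(s): makespan = {makespan}")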

Citation (APA)

Nikolova, K., You, S. P., & Sowa, M. (2002). Compiler-controlled parallelism-independent scheduling for parallel and distributed system. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2367, pp. 484–493). Springer Verlag. https://doi.org/10.1007/3-540-48051-x_48
