Super-scalable algorithms for computing on 100,000 processors

42 Citations · 15 Mendeley Readers


Abstract

In the next five years, the number of processors in high-end systems for scientific computing is expected to rise to tens and even hundreds of thousands. For example, the IBM BlueGene/L can have up to 128,000 processors, and delivery of the first system is scheduled for 2005. Existing deficiencies in the scalability and fault tolerance of scientific applications need to be addressed soon. If the number of processors grows by an order of magnitude while efficiency drops by an order of magnitude, the overall effective computing performance stays the same. Furthermore, the mean time to interrupt of high-end computer systems decreases with scale and complexity. In a 100,000-processor system, failures may occur every couple of minutes, and traditional checkpointing may no longer be feasible. With this paper, we summarize our recent research in super-scalable algorithms for computing on 100,000 processors. We introduce the algorithm properties of scale invariance and natural fault tolerance, and discuss how they can be applied to two different classes of algorithms. We also describe a super-scalable diskless checkpointing algorithm for problems that cannot be transformed into a super-scalable variant, or where other solutions are more efficient. Finally, a 100,000-processor simulator is presented as a platform for testing and experimentation. © Springer-Verlag Berlin Heidelberg 2005.
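The diskless checkpointing mentioned in the abstract can be pictured with a minimal sketch, assuming an XOR-parity scheme over small peer groups in which each process keeps its checkpoint in memory and a parity block allows one lost state to be rebuilt. The group size, state layout, and function names below are illustrative assumptions, not the authors' exact algorithm.

```python
# Illustrative sketch (not the paper's implementation): XOR-parity diskless
# checkpointing within a small peer group. Each process holds its own checkpoint
# in memory; a parity block, stored on another peer, lets one failed process's
# state be reconstructed without any disk I/O.

import numpy as np

GROUP_SIZE = 4    # hypothetical number of processes per checkpoint group
STATE_WORDS = 8   # hypothetical size of each process's local checkpoint


def make_parity(states):
    """XOR all local checkpoints together to form the group's parity block."""
    parity = np.zeros(STATE_WORDS, dtype=np.uint64)
    for s in states:
        parity ^= s
    return parity


def recover(states, parity, failed_rank):
    """Rebuild the failed process's checkpoint from the survivors and the parity."""
    rebuilt = parity.copy()
    for rank, s in enumerate(states):
        if rank != failed_rank:
            rebuilt ^= s
    return rebuilt


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    states = [rng.integers(0, 2**32, STATE_WORDS, dtype=np.uint64)
              for _ in range(GROUP_SIZE)]
    parity = make_parity(states)        # taken at checkpoint time
    lost = 2                            # pretend rank 2 fails afterwards
    restored = recover(states, parity, lost)
    assert np.array_equal(restored, states[lost])
```

The trade-off sketched here is the usual one for parity-based diskless checkpointing: one extra state-sized block of memory per group buys tolerance of a single failure per group between checkpoints, at the cost of XOR traffic among peers instead of writes to a parallel file system.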

Cite (APA)

Engelmann, C., & Geist, A. (2005). Super-scalable algorithms for computing on 100,000 processors. In Lecture Notes in Computer Science (Vol. 3514, pp. 313–321). Springer Verlag. https://doi.org/10.1007/11428831_39
