Too big for MPI?

Abstract

In 2008 the National Leadership Computing Facility at Oak Ridge National Laboratory will have a petaflop system in place. This system will have tens of thousands of processors and petabytes of memory. This capability system will focus on application problems that are so hard that they require weeks on the full system to achieve breakthrough science in nanotechnology, medicine, and energy. With long-running jobs on such huge computing systems, the question arises: are the computers and applications getting too big for MPI? This talk will address several reasons why the answer to this question may be yes. The first reason is the growing need for fault tolerance. This talk will review the recent efforts in adding fault tolerance to MPI and the broader need for holistic fault tolerance across petascale machines. The second reason is the potential need by these applications for new features or capabilities that do not exist in the MPI standard. A third reason is the emergence of new languages and programming paradigms. This talk will discuss the DARPA High Productivity Computing Systems project and the new languages Chapel, Fortress, and X10, being developed by Cray, Sun, and IBM respectively.
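
The fault-tolerance discussion starts from what the MPI standard itself already provides. As a minimal sketch in C (an illustration, not taken from the talk), the program below uses the standard error-handler hook, MPI_Comm_set_errhandler with MPI_ERRORS_RETURN, so that a failed call returns an error code to the application instead of aborting the whole job; fault-tolerant MPI efforts extend this baseline by adding ways to recover the communicator itself after a process is lost.

/* Minimal sketch: MPI's standard error-handling hook.
 * By default MPI aborts the job on any error (MPI_ERRORS_ARE_FATAL).
 * Switching MPI_COMM_WORLD to MPI_ERRORS_RETURN lets the application
 * see the error code and attempt its own recovery. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Ask MPI to return error codes instead of aborting. */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Deliberately send to an out-of-range rank to trigger an error. */
    int payload = rank;
    int rc = MPI_Send(&payload, 1, MPI_INT, size /* invalid dest */, 0,
                      MPI_COMM_WORLD);

    if (rc != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int len;
        MPI_Error_string(rc, msg, &len);
        fprintf(stderr, "rank %d: MPI error caught: %s\n", rank, msg);
        /* An application would attempt recovery here; plain MPI gives
         * no way to rebuild a communicator that has lost processes,
         * which is the gap the fault-tolerance efforts address. */
    }

    MPI_Finalize();
    return 0;
}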

Cite

APA

Geist, A. (2006). Too big for MPI? In Lecture Notes in Computer Science, Vol. 4192, p. 1. Springer-Verlag. https://doi.org/10.1007/11846802_1
