"Brute-force" solution of large-scale systems of equations in a MPI-PBLAS-ScaLAPACK environment


Abstract

Space-borne gravity field recovery requires the solution of large-scale linear systems of equations to estimate tens of thousands of unknown gravity field parameters from tens of millions of observations. Satellite gravity data can only be exploited efficiently by adopting high-performance computing (HPC) technologies. The extension of the GOCE (Gravity field and steady-state Ocean Circulation Explorer) mission, in particular, poses unprecedented computational challenges in geodesy. Continuing the work presented in the 2010 annual report, we have prepared a distributed-memory version of our program based on the MPI, PBLAS and ScaLAPACK programming standards. The tailored implementation extends the range of usable computer architectures to machines with less memory per node than the NEC SX-8 and SX-9 systems used so far. We present implementation details and runtime results obtained on the NEC SX systems operated as distributed-memory systems. A comparison with our OpenMP version shows that the MPI implementation yields a speedup of around 12% for large-scale problems. © Springer-Verlag Berlin Heidelberg 2012.
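
To make the programming model referred to in the abstract concrete, the following minimal sketch shows how a symmetric positive-definite system, of the kind the normal equations in least-squares gravity field recovery lead to, can be factorized and solved with ScaLAPACK's Cholesky routines (pdpotrf/pdpotrs) on a two-dimensional block-cyclic BLACS process grid. It is an illustrative example only, not the authors' implementation; the problem size, block size, test matrix and build command are placeholder assumptions.

/*
 * Illustrative sketch only (NOT the authors' code): solve an SPD system
 * with ScaLAPACK Cholesky on a 2D block-cyclic process grid.
 * Placeholder build line:  mpicc scalapack_demo.c -lscalapack -llapack -lblas
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* BLACS C interface and Fortran ScaLAPACK entry points */
extern void Cblacs_pinfo(int *mypnum, int *nprocs);
extern void Cblacs_get(int icontxt, int what, int *val);
extern void Cblacs_gridinit(int *icontxt, char *order, int nprow, int npcol);
extern void Cblacs_gridinfo(int icontxt, int *nprow, int *npcol,
                            int *myrow, int *mycol);
extern void Cblacs_gridexit(int icontxt);
extern int  numroc_(int *n, int *nb, int *iproc, int *isrcproc, int *nprocs);
extern void descinit_(int *desc, int *m, int *n, int *mb, int *nb, int *irsrc,
                      int *icsrc, int *ictxt, int *lld, int *info);
extern void pdpotrf_(char *uplo, int *n, double *a, int *ia, int *ja,
                     int *desca, int *info);
extern void pdpotrs_(char *uplo, int *n, int *nrhs, double *a, int *ia,
                     int *ja, int *desca, double *b, int *ib, int *jb,
                     int *descb, int *info);

/* local (0-based) index -> global (0-based) index, block-cyclic layout */
static int l2g(int il, int nb, int iproc, int nprocs) {
    return ((il / nb) * nprocs + iproc) * nb + il % nb;
}

int main(int argc, char **argv) {
    int n = 1000, nb = 64;               /* placeholder problem/block size */
    int izero = 0, ione = 1, info;
    char uplo = 'L';

    MPI_Init(&argc, &argv);

    /* choose an (approximately square) process grid and initialise BLACS */
    int mypnum, nprocs, ctxt, nprow = 1, npcol, myrow, mycol;
    Cblacs_pinfo(&mypnum, &nprocs);
    while (nprow * nprow <= nprocs) nprow++;
    for (nprow--; nprocs % nprow; nprow--) ;
    npcol = nprocs / nprow;
    Cblacs_get(-1, 0, &ctxt);
    Cblacs_gridinit(&ctxt, "Row-major", nprow, npcol);
    Cblacs_gridinfo(ctxt, &nprow, &npcol, &myrow, &mycol);

    /* local array dimensions and ScaLAPACK array descriptors */
    int mloc = numroc_(&n, &nb, &myrow, &izero, &nprow);
    int nloc = numroc_(&n, &nb, &mycol, &izero, &npcol);
    int lld  = mloc > 1 ? mloc : 1;
    int desca[9], descb[9];
    descinit_(desca, &n, &n,    &nb, &nb, &izero, &izero, &ctxt, &lld, &info);
    descinit_(descb, &n, &ione, &nb, &nb, &izero, &izero, &ctxt, &lld, &info);

    double *a = malloc((size_t)lld * (nloc > 1 ? nloc : 1) * sizeof *a);
    double *b = malloc((size_t)lld * sizeof *b);

    /* fill the local pieces of a made-up SPD test system:
       A = n*I + ones(n,n),  right-hand side b = ones(n,1) */
    for (int jl = 0; jl < nloc; jl++)
        for (int il = 0; il < mloc; il++) {
            int ig = l2g(il, nb, myrow, nprow);
            int jg = l2g(jl, nb, mycol, npcol);
            a[il + (size_t)jl * lld] = (ig == jg ? (double)n : 0.0) + 1.0;
        }
    for (int il = 0; il < mloc; il++) b[il] = 1.0;

    /* distributed Cholesky factorisation and triangular solves */
    pdpotrf_(&uplo, &n, a, &ione, &ione, desca, &info);
    if (info == 0)
        pdpotrs_(&uplo, &n, &ione, a, &ione, &ione, desca,
                 b, &ione, &ione, descb, &info);
    if (mypnum == 0 && info == 0 && mloc > 0)
        printf("solved, first local solution entry = %g\n", b[0]);

    free(a); free(b);
    Cblacs_gridexit(ctxt);
    MPI_Finalize();
    return 0;
}

In this layout each process holds only its local block-cyclic panels of the distributed matrix, which is what lowers the per-node memory footprint compared with a shared-memory (OpenMP) version and makes machines with less memory per node usable, in the spirit of the abstract.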

Cite

CITATION STYLE

APA

Roth, M., Baur, O., & Keller, W. (2012). “Brute-force” solution of large-scale systems of equations in a MPI-PBLAS-ScaLAPACK environment. In High Performance Computing in Science and Engineering 2011 - Transactions of the High Performance Computing Center, Stuttgart, HLRS 2011 (pp. 581–594). Springer Verlag. https://doi.org/10.1007/978-3-642-23869-7_42
