On the performance of a new parallel algorithm for large-scale simulations of nonlinear partial differential equations


Abstract

A new parallel numerical algorithm based on generating suitable random trees has been developed for solving nonlinear parabolic partial differential equations. The algorithm is well suited to current high-performance supercomputers, showing remarkable performance and arbitrary scalability. While classical techniques based on deterministic domain decomposition exhibit strong limitations as the problem size increases (mainly due to intercommunication overhead), probabilistic methods allow us to exploit massively parallel architectures, since the problem can be fully decoupled. Several examples were run on a high-performance computer, and their scalability and performance were carefully analyzed. Large-scale simulations confirmed that computational time decreases proportionally to the cube of the number of processors, whereas memory use decreases quadratically. © 2010 Springer-Verlag Berlin Heidelberg.
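The paper's algorithm generates random trees to handle the nonlinear case; the details are in the full text. As a minimal sketch of why probabilistic methods decouple so well, consider the linear heat equation u_t = (1/2) u_xx with initial datum u(x, 0) = f(x), whose Feynman-Kac representation is u(x, t) = E[f(x + W_t)] for a Brownian motion W. Each sample path is independent of every other, so the estimator below could be split across arbitrarily many processors with no intercommunication. The function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def heat_mc(f, x, t, n_samples=100_000, seed=0):
    """Monte Carlo estimate of u(x, t) for u_t = 0.5 * u_xx, u(x, 0) = f(x),
    via Feynman-Kac: u(x, t) = E[f(x + W_t)], with W_t ~ N(0, t).

    Every sample is independent, so the loop over samples is
    embarrassingly parallel: each processor can draw its own batch
    and only the final averages need to be combined.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, np.sqrt(t), size=n_samples)  # samples of W_t
    return f(x + w).mean()

# For f(x) = x^2 the exact solution is u(x, t) = x^2 + t,
# so at x = 1, t = 0.5 the estimate should be close to 1.5.
est = heat_mc(lambda x: x**2, x=1.0, t=0.5)
print(est)
```

For the nonlinear parabolic problems targeted by the paper, each sample is a random branching tree rather than a single Brownian path, but the same decoupling argument applies: trees are generated independently, which is what removes the domain-decomposition communication bottleneck.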

Citation (APA)

Acebrón, J. A., Rodríguez-Rozas, Á., & Spigler, R. (2010). On the performance of a new parallel algorithm for large-scale simulations of nonlinear partial differential equations. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6067 LNCS, pp. 41–50). https://doi.org/10.1007/978-3-642-14390-8_5
