Massively parallel preconditioners for the sparse conjugate gradient method


Abstract

We study the conjugate gradient method for solving large sparse linear systems with two kinds of preconditioning: polynomial and ILU. A parallel version is evaluated on the Connection Machine 2 (CM-2) with large sparse matrices. The results show that a trade-off must be found between high performance (in terms of Mflops) and fast convergence. We first conclude that, to obtain efficient methods on massively parallel computers, especially when irregular structures are used, parallelising the usual algorithms is not always the most efficient approach. We then introduce the new massively parallel hybrid polynomial-/ILUTmp(l, ε, d) preconditioning for distributed-memory machines using a data-parallel programming model.
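To make the polynomial option concrete, here is a minimal dense-matrix sketch of a preconditioned conjugate gradient solver using a Jacobi-split Neumann-series polynomial as the preconditioner. This is one standard instance of polynomial preconditioning, not necessarily the exact variant the authors evaluate, and it uses dense NumPy arrays rather than the sparse CM-2 data structures of the paper; the function names (`neumann_precond`, `pcg`) and the `degree` parameter are illustrative choices.

```python
import numpy as np

def neumann_precond(A, r, degree=3):
    # Jacobi-split Neumann-series polynomial preconditioner:
    #   M^{-1} r ≈ (I + N + N^2 + ... + N^degree) D^{-1} r,
    # where D = diag(A) and N = I - D^{-1} A. Each application costs
    # `degree` matrix-vector products, which map well onto SIMD hardware.
    d_inv = 1.0 / A.diagonal()
    z = d_inv * r          # k = 0 term: D^{-1} r
    term = z.copy()
    for _ in range(degree):
        term = term - d_inv * (A @ term)   # multiply by N = I - D^{-1} A
        z = z + term
    return z

def pcg(A, b, degree=3, tol=1e-10, maxiter=500):
    # Preconditioned conjugate gradient for symmetric positive definite A.
    # The preconditioner is applied only through mat-vec products, so no
    # triangular solve (the bottleneck of ILU on SIMD machines) is needed.
    x = np.zeros_like(b)
    r = b - A @ x
    z = neumann_precond(A, r, degree)
    p = z.copy()
    rz = r @ z
    for it in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it + 1
        z = neumann_precond(A, r, degree)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter
```

Because every step is a matrix-vector product or a vector update, the whole iteration is data-parallel; this is the structural contrast with ILU preconditioning, whose forward/backward triangular solves are inherently sequential, and it illustrates the Mflops-versus-convergence trade-off the abstract refers to.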

Citation (APA)

Petiton, S., & Weill-Duflos, C. (1992). Massively parallel preconditioners for the sparse conjugate gradient method. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 634 LNCS, pp. 373–378). Springer Verlag. https://doi.org/10.1007/3-540-55895-0_433
