We study the conjugate gradient method for solving large sparse linear systems with two preconditioning strategies: polynomial and ILU preconditioning. A parallel version is evaluated on the Connection Machine 2 (CM-2) with large sparse matrices. The results show that a tradeoff must be found between high performance (in terms of Mflops) and fast convergence. We first conclude that, to obtain efficient methods on massively parallel computers, especially when irregular structures are used, parallelising the usual algorithms is not always the most efficient approach. We then introduce a new massively parallel hybrid polynomial-/ILUTmp(l, ε, d) preconditioner for distributed-memory machines using a data-parallel programming model.
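The abstract does not detail the specific polynomial used in the paper. As a minimal illustrative sketch only, the following NumPy code shows preconditioned conjugate gradient with a truncated Neumann-series polynomial preconditioner (one common choice of polynomial preconditioning), on a dense matrix rather than the CM-2's sparse data-parallel representation; the function names and the degree parameter are assumptions for illustration:

```python
import numpy as np

def neumann_polynomial_preconditioner(A, degree=3):
    """Return a function applying M^{-1} r = sum_{k=0}^{degree} (I - D^{-1}A)^k D^{-1} r,
    a truncated Neumann-series polynomial preconditioner (illustrative choice;
    not necessarily the polynomial used in the paper)."""
    D_inv = 1.0 / np.diag(A)  # inverse of the diagonal of A
    def apply(r):
        z = D_inv * r          # k = 0 term
        term = z.copy()
        for _ in range(degree):
            term = term - D_inv * (A @ term)  # multiply by (I - D^{-1}A)
            z += term                          # accumulate next series term
        return z
    return apply

def preconditioned_cg(A, b, M_inv, tol=1e-10, max_iter=1000):
    """Standard preconditioned conjugate gradient for SPD A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

The polynomial preconditioner needs only matrix-vector products and elementwise operations, which is why it maps well onto massively parallel, data-parallel hardware such as the CM-2, whereas classical ILU involves sequential triangular solves.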
CITATION STYLE
Petiton, S., & Weill-Duflos, C. (1992). Massively parallel preconditioners for the sparse conjugate gradient method. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 634 LNCS, pp. 373–378). Springer Verlag. https://doi.org/10.1007/3-540-55895-0_433