How can small-scale parallelism best be exploited in the solution of nonstiff initial value problems? It is generally accepted that only modest gains in efficiency are possible, and it is often the case that "fast" parallel algorithms have quite crude error control and stepsize selection components. In this paper we consider the possibility of using parallelism to improve reliability and functionality rather than efficiency. We present an algorithm that can be used with any explicit Runge-Kutta formula. The basic idea is to take several smaller substeps in parallel with the main step. The substeps provide an interpolation facility that is essentially free, and the error control strategy can then be based on a defect (residual) sample. If the number of processors exceeds (p - 1)/2, where p is the order of the Runge-Kutta formula, then the interpolant and the error control scheme satisfy very strong reliability conditions. Further, for a given order p, the asymptotically optimal values for the substep lengths are independent of the problem and formula and hence can be computed a priori. Theoretical comparisons between the parallel algorithm and optimal sequential algorithms at various orders are given. We also report on numerical tests of the reliability and efficiency of the new algorithm, and give some parallel timing statistics from a 4-processor machine. © 1991 BIT Foundations.
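The core idea of the abstract (a main Runge-Kutta step with concurrent substeps whose values feed an interpolant, with error control by sampling the defect u'(s) − f(s, u(s))) can be illustrated with a minimal sequential sketch. This is not the authors' algorithm: the RK formula (classical RK4), the single substep of length h/2, the quartic interpolant, and the sample point θ = 0.6 are all illustrative choices; the paper instead derives asymptotically optimal, problem-independent substep lengths and sample points, and runs the substeps in parallel.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step (the paper's scheme
    works with any explicit RK formula; RK4 is just an example)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def defect_controlled_step(f, t, y, h, tol):
    """Take a main step and one substep of length h/2 (in the paper the
    substeps run concurrently with the main step on other processors).
    The substep value becomes an extra interpolation node, and error
    control samples the defect u'(s) - f(s, u(s)) of the interpolant u."""
    y_full = rk4_step(f, t, y, h)       # main step
    y_half = rk4_step(f, t, y, h / 2)   # substep (parallel in the paper)

    y = np.atleast_1d(np.asarray(y, dtype=float))
    y_full = np.atleast_1d(y_full)
    y_half = np.atleast_1d(y_half)
    f0 = np.atleast_1d(f(t, y))
    f1 = np.atleast_1d(f(t + h, y_full))

    # Quartic interpolant u(t + theta*h) = sum_k c[k] * theta**k matching
    # u(t), u'(t), u(t + h/2), u(t + h), u'(t + h).
    c0, c1 = y, h * f0
    A = np.array([[0.25, 0.125, 0.0625],   # value condition at theta = 1/2
                  [1.0,  1.0,   1.0],      # value condition at theta = 1
                  [2.0,  3.0,   4.0]])     # slope condition at theta = 1
    B = np.stack([y_half - c0 - c1 / 2,
                  y_full - c0 - c1,
                  h * f1 - c1])
    c2, c3, c4 = np.linalg.solve(A, B)

    def u(theta):
        return c0 + c1 * theta + c2 * theta**2 + c3 * theta**3 + c4 * theta**4

    def u_prime(theta):  # d/ds of u at s = t + theta*h
        return (c1 + 2 * c2 * theta + 3 * c3 * theta**2
                + 4 * c4 * theta**3) / h

    # Sample the defect at one interior point; theta = 0.6 is arbitrary
    # here, whereas the paper computes optimal sample points a priori.
    theta = 0.6
    s = t + theta * h
    defect = np.linalg.norm(u_prime(theta) - np.atleast_1d(f(s, u(theta))))
    return y_full, defect, defect <= tol
```

Because the interpolant comes from values already computed (main step plus substep), the defect sample costs only one extra evaluation of f per step, which is the sense in which the interpolation facility is "essentially free".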
Enright, W. H., & Higham, D. J. (1991). Parallel defect control. BIT, 31(4), 647–663. https://doi.org/10.1007/BF01933179