Parallel SOR for solving the convection diffusion equation using GPUs with CUDA

Abstract

In this paper we study a parallel form of the SOR method for the numerical solution of the convection-diffusion equation suitable for GPUs using CUDA. To exploit the parallelism offered by GPUs we consider the fine-grain parallelism model. This is achieved by considering the local relaxation version of SOR. More specifically, we use SOR with red-black ordering with two sets of parameters ω_ij and ω′_ij. The parameter ω_ij is associated with each red (i+j even) grid point (i, j), whereas the parameter ω′_ij is associated with each black (i+j odd) grid point (i, j). The use of a parameter for each grid point avoids the global communication required in the adaptive determination of the best value of ω and also increases the convergence rate of the SOR method [3]. We present our strategy and the results of our effort to exploit the computational capabilities of GPUs under the CUDA environment. Additionally, a program for the CPU was developed as a performance reference. Significant performance improvements were achieved with the three GPU kernel variants developed, each of which proved to have different strengths and weaknesses. © 2012 Springer-Verlag.
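To make the red-black local-relaxation idea concrete, here is a minimal CPU sketch (not the authors' CUDA code): one red-black SOR sweep for the 5-point Laplacian, a simplified stand-in for the paper's convection-diffusion operator. The function name, the pure-diffusion stencil, and the per-point arrays `omega_red`/`omega_black` are illustrative assumptions; the key point is that all points of one color depend only on points of the other color, so each color can be updated fully in parallel, and each point carries its own relaxation parameter.

```python
import numpy as np

def red_black_sor_step(u, f, omega_red, omega_black, h):
    """One red-black SOR sweep for -Laplace(u) = f on a uniform grid.

    Illustrative sketch only: the paper treats the convection-diffusion
    operator; here we use the 5-point Laplacian for brevity.
    omega_red / omega_black hold one relaxation parameter per grid
    point, mirroring the local relaxation scheme (ω_ij, ω′_ij).
    """
    n, m = u.shape
    # Red points (i+j even) first, then black (i+j odd); within a
    # color, updates are independent -- the source of GPU parallelism.
    for color, omega in ((0, omega_red), (1, omega_black)):
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                if (i + j) % 2 != color:
                    continue
                # Gauss-Seidel value from the four neighbours
                gs = 0.25 * (u[i - 1, j] + u[i + 1, j]
                             + u[i, j - 1] + u[i, j + 1]
                             + h * h * f[i, j])
                # Local over-relaxation with this point's own omega
                u[i, j] += omega[i, j] * (gs - u[i, j])
    return u
```

In a CUDA version, the two inner loops over one color become a single kernel launch, with one thread per same-colored point; no thread reads a value written by another thread of the same launch, so no global synchronization inside a color sweep is needed.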

CITATION STYLE

APA

Cotronis, Y., Konstantinidis, E., Louka, M. A., & Missirlis, N. M. (2012). Parallel SOR for solving the convection diffusion equation using GPUs with CUDA. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7484 LNCS, pp. 575–586). https://doi.org/10.1007/978-3-642-32820-6_57
