Solving large systems of linear equations on GPUs

Abstract

Graphics Processing Units (GPUs) have become widely accessible peripheral devices with great computing capacity. Moreover, GPUs can be used not only to accelerate the graphics produced by a computer but also for general-purpose computing. Many researchers use this technique on their personal workstations to accelerate the execution of their programs, and have often found that the amount of memory available on GPU cards is typically smaller than the amount of memory available on the host computer. We are interested in exploring approaches to solving problems under this restriction. Our main contribution is to devise ways in which portions of the problem can be moved to the memory of the GPU to be solved using its multiprocessing capabilities. We implemented the Jacobi iterative method on a GPU to solve systems of linear equations and report the results obtained, analyzing performance and accuracy. Our code solves systems of linear equations large enough to exceed the card’s memory, but not the host memory. Significant speedups were observed: the execution time taken to solve each system is shorter than those obtained with Intel® MKL and Eigen, libraries designed to work on CPUs.
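For readers unfamiliar with the method mentioned in the abstract, the Jacobi iteration repeatedly updates each unknown from the previous iterate: x_new = D⁻¹(b − R·x), where D holds the diagonal of A and R its off-diagonal part. The sketch below is a minimal CPU-side illustration in Python/NumPy, not the authors' GPU implementation; the matrix, tolerance, and iteration cap are illustrative assumptions (convergence is guaranteed here because the example matrix is strictly diagonally dominant).

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=1000):
    """Solve Ax = b with the Jacobi iteration.

    Converges when A is strictly diagonally dominant.
    """
    x = np.zeros_like(b, dtype=float)
    D = np.diag(A)            # diagonal entries of A
    R = A - np.diagflat(D)    # off-diagonal part of A
    for _ in range(max_iter):
        x_new = (b - R @ x) / D   # each component updated independently
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

# Small strictly diagonally dominant example system
A = np.array([[10.0, 1.0, 2.0],
              [1.0, 12.0, 3.0],
              [2.0, 3.0, 15.0]])
b = np.array([13.0, 16.0, 20.0])
x = jacobi(A, b)
```

Because every component of `x_new` is computed independently of the others within an iteration, the update maps naturally onto the GPU's many cores, which is what makes Jacobi (rather than, say, Gauss-Seidel) a common choice for this kind of parallelization.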

Citation (APA):
Llano-Ríos, T. F., Ocampo-García, J. D., Yepes-Ríos, J. S., Correa-Zabala, F. J., & Trefftz, C. (2018). Solving large systems of linear equations on GPUs. In Communications in Computer and Information Science (Vol. 885, pp. 39–54). Springer Verlag. https://doi.org/10.1007/978-3-319-98998-3_4
