Batch QR Factorization on GPUs: Design, Optimization, and Tuning

Abstract

QR factorization of dense matrices is a ubiquitous tool in high performance computing (HPC). From solving linear systems and least squares problems to eigenvalue problems and singular value decompositions, a high performance QR factorization is fundamental to computer simulations and many applications. More importantly, the QR factorization of a batch of relatively small matrices has attracted considerable attention in sparse direct solvers and in low-rank approximations for hierarchical matrices. To address this interest and demand, we developed and present a high performance batch QR factorization for Graphics Processing Units (GPUs). We present a multi-level blocking strategy that adapts various algorithmic designs to the size of the input matrices. We also show that following the LAPACK QR design convention, while still useful, is significantly outperformed by unconventional code structures that increase data reuse. The performance results show multi-fold speedups against state-of-the-art libraries on the latest GPU architectures from both NVIDIA and AMD.
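As a rough illustration of the baseline structure such a batched factorization builds on (not the paper's optimized, multi-level-blocked kernels), the following CUDA sketch assigns one thread block per small column-major matrix and performs an unblocked Householder QR in the LAPACK GEQRF storage layout. All names here (qr_batched_kernel, dA_array, dtau_array) are illustrative assumptions, not the paper's or any library's actual API.

// Illustrative sketch only; a minimal unblocked batched Householder QR,
// not the optimized implementation described in the paper.
#include <cuda_runtime.h>
#include <math.h>

// One thread block factors one small column-major m-by-n matrix (m >= n).
// On exit, the upper triangle holds R and the strictly lower part holds the
// Householder vectors (unit diagonal implicit), as in LAPACK's GEQRF.
__global__ void qr_batched_kernel(int m, int n,
                                  double** dA_array, int ldda,
                                  double** dtau_array)
{
    double* A   = dA_array[blockIdx.x];   // this block's matrix
    double* tau = dtau_array[blockIdx.x];
    const int tid = threadIdx.x;
    __shared__ double tauj;

    for (int j = 0; j < n; ++j) {
        // Thread 0 generates the Householder reflector for column j.
        if (tid == 0) {
            double norm = 0.0;
            for (int i = j; i < m; ++i) norm += A[i + j*ldda] * A[i + j*ldda];
            norm = sqrt(norm);
            double ajj  = A[j + j*ldda];
            double beta = (ajj >= 0.0) ? -norm : norm;            // avoid cancellation
            tauj = (beta != 0.0) ? (beta - ajj) / beta : 0.0;
            double scale = (ajj - beta != 0.0) ? 1.0 / (ajj - beta) : 0.0;
            for (int i = j + 1; i < m; ++i) A[i + j*ldda] *= scale; // store v below diag
            A[j + j*ldda] = beta;                                   // diagonal entry of R
            tau[j] = tauj;
        }
        __syncthreads();

        // Each thread applies H_j = I - tau * v * v^T to one trailing column.
        for (int k = j + 1 + tid; k < n; k += blockDim.x) {
            double w = A[j + k*ldda];                   // v_j == 1 (implicit)
            for (int i = j + 1; i < m; ++i)
                w += A[i + j*ldda] * A[i + k*ldda];
            w *= tauj;
            A[j + k*ldda] -= w;
            for (int i = j + 1; i < m; ++i)
                A[i + k*ldda] -= w * A[i + j*ldda];
        }
        __syncthreads();
    }
}

// Hypothetical launch: one block per matrix in the batch, e.g.
//   qr_batched_kernel<<<batchCount, 64>>>(m, n, dA_array, ldda, dtau_array);

The "one thread block per matrix" mapping is what makes the problem batch-friendly, but this naive version keeps everything in global memory and serializes the reflector generation on one thread. The paper's contribution lies in going beyond this baseline, e.g. multi-level blocking chosen by matrix size and code structures that increase data reuse, which is where the reported speedups over conventional LAPACK-style designs come from.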

Citation (APA)

Abdelfattah, A., Tomov, S., & Dongarra, J. (2022). Batch QR Factorization on GPUs: Design, Optimization, and Tuning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13350 LNCS, pp. 60–74). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-08751-6_5
