On the parallel I/O optimality of linear algebra kernels: Near-optimal matrix factorizations


Abstract

Matrix factorizations are among the most important building blocks of scientific computing. However, state-of-the-art libraries are not communication-optimal, underutilizing current parallel architectures. We present novel algorithms for Cholesky and LU factorizations that utilize an asymptotically communication-optimal 2.5D decomposition. We first establish a theoretical framework for deriving parallel I/O lower bounds for linear algebra kernels, and then utilize its insights to derive Cholesky and LU schedules, both communicating N³/(P√M) elements per processor, where M is the local memory size. The empirical results match our theoretical analysis: our implementations communicate significantly less than Intel MKL, SLATE, and the asymptotically communication-optimal CANDMC and CAPITAL libraries. Our code outperforms these state-of-the-art libraries in almost all tested scenarios, with matrix sizes ranging from 2,048 to 524,288 on up to 512 CPU nodes of the Piz Daint supercomputer, decreasing the time-to-solution by up to three times. Our code is ScaLAPACK-compatible and available as an open-source library.
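As a quick sanity check on the bound quoted above, the per-processor communication volume N³/(P√M) can be evaluated directly. The sketch below is illustrative only; the matrix size, processor count, and local memory capacity are assumed values, not figures from the paper.

```python
import math

def words_per_processor(n: int, p: int, m: int) -> float:
    """Per-processor communication volume N^3 / (P * sqrt(M)),
    the cost stated for the 2.5D Cholesky/LU schedules.

    n: matrix dimension, p: number of processors,
    m: local memory size in matrix elements."""
    return n**3 / (p * math.sqrt(m))

# Hypothetical configuration: a 16,384 x 16,384 matrix on 64
# processors, each with room for 2^24 elements locally.
print(words_per_processor(16_384, 64, 2**24))
```

Note how the bound rewards extra local memory: quadrupling M halves the communication volume, which is the lever 2.5D decompositions exploit by replicating data across processor layers.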

Citation (APA)

Kwasniewski, G., Kabic, M., Ben-Nun, T., Ziogas, A. N., Saethre, J. E., Gaillard, A., … Hoefler, T. (2021). On the parallel I/O optimality of linear algebra kernels: Near-optimal matrix factorizations. In International Conference for High Performance Computing, Networking, Storage and Analysis, SC. IEEE Computer Society. https://doi.org/10.1145/3458817.3476167
