We derive a new parallel communication-avoiding matrix powers algorithm for matrices of the form A = D + USV^H, where D is sparse and USV^H is low-rank and possibly dense. We demonstrate that, relative to the cost of computing k sparse matrix-vector multiplications, our algorithm asymptotically reduces the parallel latency by a factor of O(k), at the cost of a small increase in bandwidth and computation. Using problems from real-world applications, our performance model predicts speedups of up to 13x on petascale machines. © 2014 Springer-Verlag.
Knight, N., Carson, E., & Demmel, J. (2014). Exploiting data sparsity in parallel matrix powers computations. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8384 LNCS, pp. 15–25). Springer Verlag. https://doi.org/10.1007/978-3-642-55224-3_2