Low-rank approximation of a matrix: Novel insights, new progress, and extensions

Abstract

The celebrated algorithms for low-rank approximation of a matrix by means of random sampling have consistently performed efficiently in a variety of studies with a variety of sparse and structured multipliers, but formal support for this empirical observation has so far been missing. Our new insight into this subject enables us to provide such elusive formal support. Furthermore, our approach promises significant acceleration of the known algorithms by means of sampling with more efficient sparse and structured multipliers, and it should also lead to enhanced performance of other fundamental matrix algorithms. Our formal results and our initial numerical tests are in good accordance with each other, and we have already extended our progress to the acceleration of the Fast Multipole Method and the Conjugate Gradient algorithms.
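The abstract refers to low-rank approximation by random sampling, i.e., multiplying the input matrix by a random "multiplier" and working with the resulting sample of its range. The short Python/NumPy sketch below illustrates only this general randomized-sampling idea; it uses a dense Gaussian multiplier for illustration, which is an assumption on our part, since the paper's contribution concerns sparse and structured multipliers whose specific constructions are not given in the abstract.

import numpy as np

def randomized_low_rank(A, rank, oversample=10, seed=None):
    """Return Q, B with A approximately equal to Q @ B, where Q has rank + oversample columns."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = min(rank + oversample, n)
    # Multiply A by a random multiplier to sample its column space.
    Omega = rng.standard_normal((n, k))
    Y = A @ Omega
    # Orthonormalize the sample to obtain a basis Q of the approximate range of A.
    Q, _ = np.linalg.qr(Y)
    # Project A onto that basis; Q @ B is the low-rank approximation.
    B = Q.T @ A
    return Q, B

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Test matrix with rapidly decaying singular values.
    U, _ = np.linalg.qr(rng.standard_normal((500, 50)))
    V, _ = np.linalg.qr(rng.standard_normal((400, 50)))
    s = 2.0 ** -np.arange(50)
    A = (U * s) @ V.T
    Q, B = randomized_low_rank(A, rank=20)
    print("relative error:", np.linalg.norm(A - Q @ B) / np.linalg.norm(A))

Replacing the Gaussian multiplier Omega with a sparse or structured one (the setting studied in the paper) reduces the cost of the product A @ Omega, which is the dominant step of this sketch.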

Cite

APA

Pan, V. Y., & Zhao, L. (2016). Low-rank approximation of a matrix: Novel insights, new progress, and extensions. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9691, pp. 352–366). Springer Verlag. https://doi.org/10.1007/978-3-319-34171-2_25
