Local sensitive low rank matrix approximation via nonconvex optimization


Abstract

The problem of matrix approximation appears ubiquitously in recommendation systems, computer vision, and text mining. The prevailing assumption is that the partially observed matrix is low-rank or can be well approximated by a low-rank matrix. This assumption, however, requires the observed matrix to be globally low rank. In this paper, we propose a local sensitive formulation of matrix approximation that relaxes the global low-rank assumption and represents the observed matrix as a weighted sum of low-rank matrices. We solve the resulting problem with nonconvex optimization, which yields more accurate low-rank matrix estimates than convex relaxation. Our experiments show improvements in prediction accuracy over classical approaches on recommendation tasks.
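The abstract's core idea — representing an observed matrix as a kernel-weighted sum of local low-rank factorizations, each fit by nonconvex optimization — can be illustrated with a minimal sketch. This is not the authors' algorithm; the anchor placement, Gaussian index kernel, and plain weighted gradient descent below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a 30x20 rank-2 matrix with ~50% of entries observed.
n, m, r = 30, 20, 2
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, m))
mask = rng.random((n, m)) < 0.5

# Anchor entries: each anchor q gets its own local low-rank model U_q V_q^T.
anchors = [(5, 5), (25, 15)]

def kernel_weights(anchor, shape, h=10.0):
    """Gaussian weight for every entry based on index distance to the anchor
    (a crude stand-in for a learned similarity kernel)."""
    i0, j0 = anchor
    I, J = np.indices(shape)
    d2 = (I - i0) ** 2 + (J - j0) ** 2
    return np.exp(-d2 / (2 * h ** 2))

W = np.stack([kernel_weights(a, (n, m)) for a in anchors])
W /= W.sum(axis=0, keepdims=True)  # weights sum to 1 at every entry

def fit_local(M, mask, w, r, steps=500, lr=0.01):
    """Fit one local factorization U V^T by weighted gradient descent on the
    observed entries -- a nonconvex problem in (U, V)."""
    U = 0.1 * rng.normal(size=(M.shape[0], r))
    V = 0.1 * rng.normal(size=(M.shape[1], r))
    for _ in range(steps):
        R = (U @ V.T - M) * mask * w   # kernel-weighted residual, observed only
        U, V = U - lr * (R @ V), V - lr * (R.T @ U)
    return U, V

local_models = [fit_local(M, mask, w, r) for w in W]

# The global estimate is the weighted sum of the local low-rank reconstructions.
Mhat = sum(w * (U @ V.T) for w, (U, V) in zip(W, local_models))

rmse = np.sqrt(((Mhat - M)[~mask] ** 2).mean())
print(f"held-out RMSE: {rmse:.3f}")
```

Because the toy matrix here is in fact globally low rank, both local models can fit it; the construction only becomes advantageous when different regions of the matrix have different low-rank structure, which is the regime the paper targets.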

Citation (APA)

Li, C. Y., Bao, W., Li, Z., Zhang, Y., Jiang, Y. L., & Yuan, C. A. (2017). Local sensitive low rank matrix approximation via nonconvex optimization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10363 LNAI, pp. 771–781). Springer Verlag. https://doi.org/10.1007/978-3-319-63315-2_67
