Optimum subspace learning and error correction for tensors

Citations: N/A
Readers (Mendeley): 35

This article is free to access.

Abstract

Confronted with high-dimensional, tensor-like visual data, we derive a method for decomposing an observed tensor into a low-dimensional structure plus unbounded but sparse irregular patterns. The optimal rank-(R1, R2, ..., RN) tensor decomposition model proposed in this paper automatically explores the low-dimensional structure of the tensor data, seeking the optimal dimension and basis for each mode and separating out the irregular patterns. Consequently, our method accounts for the implicit multi-factor structure of tensor-like visual data in an explicit and concise manner. In addition, the optimal tensor decomposition is formulated as a convex optimization problem through a relaxation technique. We then develop a block coordinate descent (BCD) based algorithm to solve the problem efficiently. In experiments, we show several applications of our method in computer vision, and the results are promising. © 2010 Springer-Verlag.
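To make the low-rank-plus-sparse idea in the abstract concrete, the following is a minimal sketch, not the authors' exact algorithm, of a BCD-style alternation for splitting a tensor X into a low-multilinear-rank part L and a sparse error part E. It assumes a common convex-relaxation recipe (singular value thresholding on each mode unfolding for the low-rank block, entry-wise soft-thresholding for the sparse block); the function names and parameters (lmbda, tau, n_iter) are illustrative only.

import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold: rebuild the tensor from its mode-n unfolding."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Entry-wise soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def low_rank_plus_sparse(X, lmbda=0.1, tau=1.0, n_iter=50):
    """Split X into a low-multilinear-rank part L and a sparse part E by
    block coordinate descent: per-mode SVT for L, soft-thresholding for E."""
    L = np.zeros_like(X)
    E = np.zeros_like(X)
    N = X.ndim
    for _ in range(n_iter):
        # L-step: average the SVT updates of every mode unfolding of (X - E)
        L = sum(fold(svt(unfold(X - E, n), tau), n, X.shape) for n in range(N)) / N
        # E-step: absorb the residual irregular patterns via soft-thresholding
        E = soft(X - L, lmbda)
    return L, E

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    core = rng.standard_normal((2, 2, 2))
    U = [rng.standard_normal((10, 2)) for _ in range(3)]
    clean = np.einsum('abc,ia,jb,kc->ijk', core, *U)   # low-rank ground truth
    noise = (rng.random(clean.shape) < 0.05) * 5.0     # sparse corruptions
    L, E = low_rank_plus_sparse(clean + noise)
    print("relative recovery error:", np.linalg.norm(L - clean) / np.linalg.norm(clean))

The averaged per-mode SVT step mirrors the "sum of mode-wise nuclear norms" relaxation commonly used for the rank-(R1, ..., RN) model; the paper's actual formulation and update rules may differ.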

Citation (APA)

Li, Y., Yan, J., Zhou, Y., & Yang, J. (2010). Optimum subspace learning and error correction for tensors. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6313 LNCS, pp. 790–803). Springer Verlag. https://doi.org/10.1007/978-3-642-15558-1_57
