Geometric understanding for unsupervised subspace learning

Abstract

In this paper, we address unsupervised subspace learning from a geometric viewpoint. First, we formulate subspace learning as an inverse problem on the Grassmannian manifold by treating all subspaces as points on it. Then, to make the model computable, we parameterize the Grassmannian manifold as an orbit of the rotation group acting on the standard subspaces, which are spanned by orthonormal basis vectors. Further, to improve robustness, we introduce a low-rank regularizer that keeps the subspace dimension as low as possible. The subspace learning problem is thus transformed into a minimization problem over two variables, the rotation and the dimension. We adopt an alternating iterative strategy to optimize these variables, in which a structure-preserving method, based on the geodesic structure of the rotation group, is designed to update the rotation. Finally, we compare the proposed approach with six state-of-the-art methods on two different kinds of real datasets. The experimental results show that our proposed method outperforms all compared methods.
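The paper's structure-preserving rotation update is not spelled out in the abstract; a minimal sketch of what a geodesic step on the rotation group typically looks like, using the matrix exponential of a skew-symmetric direction (the function name, step size, and the stand-in gradient are illustrative assumptions, not the authors' exact method), is:

```python
import numpy as np
from scipy.linalg import expm

def geodesic_rotation_step(R, G, step=0.1):
    """One structure-preserving update of a rotation matrix R in SO(n).

    G is the Euclidean gradient of the objective w.r.t. R (an assumption
    for illustration). Projecting R.T @ G onto the skew-symmetric matrices
    gives a tangent direction; moving along the geodesic R @ expm(-step*A)
    keeps the iterate exactly on the rotation group.
    """
    A = R.T @ G
    A = 0.5 * (A - A.T)           # project onto skew-symmetric matrices
    return R @ expm(-step * A)    # geodesic (exponential-map) update

rng = np.random.default_rng(0)
n = 4
R = np.eye(n)
G = rng.standard_normal((n, n))   # stand-in gradient, for illustration only
R_new = geodesic_rotation_step(R, G)
```

Because the exponential of a skew-symmetric matrix is a rotation, the updated iterate stays orthogonal with determinant +1, which is what "structure-preserving" refers to here.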

Citation (APA)

Ying, S., Cai, L., He, C., & Peng, Y. (2019). Geometric understanding for unsupervised subspace learning. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2019-August, pp. 4171–4177). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2019/579
