Unsupervised subspace learning via analysis dictionary learning


Abstract

Ubiquitous digital devices, sensors, and social networks produce tremendous amounts of high-dimensional data. High dimensionality leads to high time complexity, a large storage burden, and degraded generalization ability. Subspace learning is one of the most effective ways to alleviate the curse of dimensionality by projecting the data onto a low-dimensional feature subspace. In this paper, we propose a novel unsupervised dimension reduction method based on analysis dictionary learning. By learning an analysis dictionary, we project each sample to a low-dimensional space whose dimension equals the number of atoms in the dictionary. The coding coefficient vector serves as the low-dimensional representation of the data because it reflects the distribution of the sample over the synthesis dictionary atoms. Manifold regularization is imposed on the low-dimensional representation to preserve the locality of the original feature space. Experiments on four datasets show that the proposed unsupervised dimension reduction model outperforms state-of-the-art methods.
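The idea in the abstract can be illustrated with a minimal NumPy sketch, not the authors' algorithm: alternately update an analysis dictionary Omega and codes Z = Omega X under a graph-Laplacian manifold regularizer. The objective, closed-form updates, function names, and parameters (k, beta, lam) below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def knn_laplacian(X, k=5):
    """Graph Laplacian L = D - W of a k-NN graph over the columns of X (d x n).

    Assumed construction: unweighted, symmetrized k-nearest-neighbor graph."""
    n = X.shape[1]
    d2 = np.sum(X ** 2, axis=0)
    dist = d2[:, None] + d2[None, :] - 2 * X.T @ X   # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(dist[i])[1:k + 1]           # k nearest, skipping the point itself
        W[i, idx] = 1.0
    W = np.maximum(W, W.T)                           # symmetrize the adjacency
    return np.diag(W.sum(axis=1)) - W

def adl_subspace(X, n_atoms, k=5, beta=0.1, lam=1e-3, n_iter=50, seed=0):
    """Sketch objective: ||Omega X - Z||_F^2 + beta * tr(Z L Z^T) + lam * ||Omega||_F^2.

    Alternating closed-form updates; Omega rows are renormalized each iteration
    to avoid the trivial all-zero solution (a stand-in for a proper constraint)."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    L = knn_laplacian(X, k)
    Omega = rng.standard_normal((n_atoms, d))
    M = np.linalg.inv(np.eye(n) + beta * L)          # fixed matrix for the Z-step
    G = np.linalg.inv(X @ X.T + lam * np.eye(d))     # fixed matrix for the Omega-step
    Z = Omega @ X @ M
    for _ in range(n_iter):
        Z = Omega @ X @ M                            # codes: manifold-regularized projection
        Omega = Z @ X.T @ G                          # dictionary: ridge-regression update
        Omega /= np.linalg.norm(Omega, axis=1, keepdims=True) + 1e-12
    return Omega, Z
```

The columns of Z are then the low-dimensional representations; their dimension is the number of dictionary atoms, matching the abstract's description.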

Citation (APA)

Gao, K., Zhu, P., Hu, Q., & Zhang, C. (2016). Unsupervised subspace learning via analysis dictionary learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9967 LNCS, pp. 556–563). Springer Verlag. https://doi.org/10.1007/978-3-319-46654-5_61
