Efficient semi-supervised feature selection with noise insensitive trace ratio criterion

Feature selection is an effective method for dealing with high-dimensional data. In many applications, such as multimedia and web mining, the data are high-dimensional and very large scale, but labeled data are often scarce. In such applications, it is important that the feature selection algorithm be efficient and able to exploit labeled and unlabeled data simultaneously. In this paper, we address this problem and propose an efficient semi-supervised feature selection algorithm that selects relevant features using both labeled and unlabeled data. First, we analyze a popular trace ratio criterion for dimensionality reduction and point out that it tends to select features with very small variance. To solve this problem, we propose a noise-insensitive trace ratio criterion for feature selection with a re-scaling preprocessing step. Interestingly, feature selection with the noise-insensitive trace ratio criterion can be solved much more efficiently. Based on this criterion, we propose a new semi-supervised feature selection algorithm that fully explores the distribution of the labeled and unlabeled data with a special label propagation method. Experimental results verify the effectiveness of the proposed algorithm and show improvement over traditional supervised feature selection algorithms. © 2012 Elsevier B.V.
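To illustrate the general idea, the following is a minimal sketch of a per-feature trace-ratio-style score: for each feature, the ratio of between-class scatter to within-class scatter. This is a simplified, hypothetical illustration of the family of criteria the abstract discusses, not the paper's exact formulation; in particular, the paper's noise-insensitive variant, re-scaling preprocessing, and label propagation step are not reproduced here, and all function names below are invented for illustration.

```python
import numpy as np

def trace_ratio_scores(X, y):
    """Score each feature by between-class / within-class variance.

    A simplified, per-feature trace-ratio-style criterion (not the
    paper's exact method). Higher scores indicate more discriminative
    features. X: (n_samples, n_features) array; y: class labels.
    """
    mean_all = X.mean(axis=0)
    sb = np.zeros(X.shape[1])  # between-class scatter per feature
    sw = np.zeros(X.shape[1])  # within-class scatter per feature
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        sb += len(Xc) * (mc - mean_all) ** 2
        sw += ((Xc - mc) ** 2).sum(axis=0)
    # A small epsilon guards against division by zero; note that a
    # near-zero within-class variance inflates the ratio, which hints
    # at the small-variance sensitivity the abstract points out.
    return sb / (sw + 1e-12)

def select_top_k(X, y, k):
    """Return the indices of the k highest-scoring features."""
    scores = trace_ratio_scores(X, y)
    return np.argsort(scores)[::-1][:k]
```

In this sketch, a feature whose within-class variance is tiny receives a very large score even if its between-class separation is negligible, which is the kind of behavior a re-scaling preprocessing step is meant to counteract.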




Liu, Y., Nie, F., Wu, J., & Chen, L. (2013). Efficient semi-supervised feature selection with noise insensitive trace ratio criterion. Neurocomputing, 105, 12–18. https://doi.org/10.1016/j.neucom.2012.05.031
