Jointly sparse reconstructed regression learning

Abstract

Least squares regression and ridge regression are simple and effective methods for feature selection and classification, and many methods based on them have been proposed. However, most of these methods suffer from the small-class problem: the number of projections they can learn is limited by the number of classes. In this paper, we propose a jointly sparse reconstructed regression (JSRR) to solve this problem. Moreover, JSRR uses the L2,1-norm as its basic measurement, which enhances robustness to outliers and guarantees joint sparsity for discriminative feature selection. In addition, by integrating the properties of robust feature selection (RFS) and principal component analysis (PCA), JSRR obtains projections that have minimum reconstruction error and strong discriminability for recognition tasks. We also propose an iterative algorithm to solve the optimization problem. A series of experiments is conducted to evaluate the performance of JSRR. Experimental results indicate that JSRR outperforms classical ridge regression and some state-of-the-art regression methods.
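As background for the abstract's claim, the L2,1-norm of a matrix sums the L2 norms of its rows, so minimizing it pushes entire rows of the projection matrix to zero at once (joint sparsity across features). The sketch below is not from the paper; it is a minimal illustration of the norm itself, with an arbitrary example matrix:

```python
import numpy as np

def l21_norm(W):
    # L2,1-norm: sum over rows of the row-wise L2 norms.
    # All-zero rows contribute nothing, which is why minimizing
    # this norm drives whole rows of W to zero, deselecting the
    # corresponding features jointly across all projections.
    return np.sum(np.linalg.norm(W, axis=1))

# Hypothetical 3-feature, 2-projection matrix; row 2 is zero,
# i.e. feature 2 is jointly deselected.
W = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [0.0, 5.0]])
print(l21_norm(W))  # row norms 5 + 0 + 5 = 10.0
```

Because the per-row L2 norm is not squared before summing (unlike the Frobenius norm), the penalty behaves like an L1 norm over rows, which is what produces the row-wise sparsity and the robustness to outliers mentioned in the abstract.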

Citation (APA)

Mo, D., Lai, Z., & Kong, H. (2018). Jointly sparse reconstructed regression learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11258 LNCS, pp. 597–609). Springer Verlag. https://doi.org/10.1007/978-3-030-03338-5_50
