Low-rank feature reduction and sample selection for multi-output regression


Abstract

Observations of high-dimensional data always contain various inherent relational structures, and exploiting them is crucial for multi-output regression. This paper therefore proposes a new multi-output regression method that simultaneously takes into account three kinds of relational structure: the relationships between outputs, between features and outputs, and between samples. Specifically, the method captures the correlation among output variables with a low-rank constraint, identifies the correlation between features and outputs by imposing an ℓ2,1-norm regularization on the coefficient matrix to conduct feature selection, and discovers the correlation among samples by placing the ℓ2,1-norm on the loss function to conduct sample selection. Furthermore, an effective iterative optimization algorithm is proposed to solve the resulting convex but non-smooth objective function. Finally, experimental results on many real datasets show that the proposed method outperforms all comparison algorithms in terms of aCC and aRMSE.
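The objective described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the nuclear norm is used as the standard convex surrogate for the low-rank constraint, and all function and variable names are assumptions.

```python
import numpy as np

def l21_norm(M):
    """l2,1-norm: sum of the Euclidean norms of the rows of M."""
    return np.sum(np.linalg.norm(M, axis=1))

def objective(X, Y, W, lam1, lam2):
    """Sketch of the three-structure objective for samples X, outputs Y,
    and coefficient matrix W:
    - l2,1-norm loss on (Y - XW): row-wise sparsity downweights outlier
      samples (sample selection);
    - l2,1-norm on W: row sparsity discards irrelevant features
      (feature selection);
    - nuclear norm of W: convex surrogate for the low-rank constraint
      that couples correlated outputs.
    """
    loss = l21_norm(Y - X @ W)                  # robust, sample-wise loss
    feat = lam1 * l21_norm(W)                   # feature-selection term
    rank = lam2 * np.linalg.norm(W, ord='nuc')  # low-rank surrogate
    return loss + feat + rank
```

Because every term is convex but the ℓ2,1 and nuclear norms are non-smooth, such an objective is typically minimized with an iterative (e.g., reweighted or proximal) scheme rather than plain gradient descent, which matches the iterative algorithm the abstract refers to.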

Citation (APA)

Zhang, S., Yang, L., Li, Y., Luo, Y., & Zhu, X. (2016). Low-rank feature reduction and sample selection for multi-output regression. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10086 LNAI, pp. 126–141). Springer Verlag. https://doi.org/10.1007/978-3-319-49586-6_9
