Non-monotonic feature selection for regression

Abstract

Feature selection is an important research problem in machine learning and data mining. In practical applications it is usually constrained by a budget on the size of the selected feature subset. When the budget changes, the ranks of features within the selected subsets may also change, owing to nonlinear cost functions for acquiring features; this property is called non-monotonic feature selection. In this paper, we focus on non-monotonic selection of features for regression tasks. We approximate the original combinatorial optimization problem by a Multiple Kernel Learning (MKL) problem and establish a performance guarantee for the derived solution relative to the globally optimal solution of the combinatorial problem. Detailed experiments demonstrate the effectiveness of the proposed method, and the empirical results indicate its promising performance compared with several state-of-the-art feature selection approaches.
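To make the MKL-based relaxation concrete, the sketch below is a toy illustration (not the authors' algorithm from the paper): each feature gets its own linear base kernel, kernel weights are learned jointly with a kernel ridge regressor via a common alternating/multiplicative MKL heuristic, and the top-weighted features are kept under a size budget. All function names, the update rule, and the demo data are illustrative assumptions.

```python
# Toy MKL-style feature selection for regression (illustrative sketch only;
# the update rule is a standard L1-normalised MKL heuristic, not the
# procedure or guarantee described in the paper).
import numpy as np

def per_feature_kernels(X):
    """One linear base kernel per feature: K_m = x_m x_m^T."""
    return [np.outer(X[:, m], X[:, m]) for m in range(X.shape[1])]

def mkl_feature_weights(X, y, lam=1.0, n_iter=50, tol=1e-6):
    """Alternate between kernel ridge regression on the combined kernel
    and an L1-normalised update of the kernel weights, which drives the
    weights of uninformative features toward zero."""
    n, d = X.shape
    kernels = per_feature_kernels(X)
    beta = np.full(d, 1.0 / d)                       # per-feature kernel weights
    for _ in range(n_iter):
        K = sum(b * Km for b, Km in zip(beta, kernels))
        alpha = np.linalg.solve(K + lam * np.eye(n), y)   # KRR dual solution
        # ||w_m|| = beta_m * sqrt(alpha^T K_m alpha) for the m-th block
        norms = np.array([b * np.sqrt(max(alpha @ Km @ alpha, 0.0))
                          for b, Km in zip(beta, kernels)])
        new_beta = norms / (norms.sum() + 1e-12)          # L1 normalisation
        if np.max(np.abs(new_beta - beta)) < tol:
            beta = new_beta
            break
        beta = new_beta
    return beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 10))
    y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.standard_normal(100)
    beta = mkl_feature_weights(X, y)
    budget = 2                                       # feature-subset size budget
    selected = np.argsort(beta)[::-1][:budget]
    print("kernel weights:", np.round(beta, 3))
    print("selected features under budget:", sorted(selected.tolist()))
```

In this sketch the budget enters only at the end (keep the top-weighted features); in the non-monotonic setting of the paper, changing the budget can change the feature ranking itself, which is exactly what the MKL formulation is meant to capture.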

Cite (APA)

Yang, H., Xu, Z., King, I., & Xu, Z. (2014). Non-monotonic feature selection for regression. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8835, pp. 44–51). Springer Verlag. https://doi.org/10.1007/978-3-319-12640-1_6
