Selecting relevant features for classifier optimization

Abstract

Feature selection is an important data pre-processing step performed before applying a machine learning algorithm. It removes irrelevant and redundant attributes from the dataset with the aim of improving the algorithm's performance. Existing feature selection methods focus on discovering the most suitable features and fall into two broad groups: wrappers, which run as a subroutine of the learning algorithm itself, and filters, which select features according to heuristics based on the characteristics of the data and are not tied to a specific algorithm. This paper improves the filter approach by enabling it to select both strongly relevant and weakly relevant features, while giving the researcher room to decide which of the weakly relevant features to include. This new approach brings clarity and understandability to the feature selection pre-processing step.
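The split between strongly and weakly relevant features can be made concrete with a small filter-style sketch. The snippet below is an illustrative assumption rather than the method described in the paper: it scores each feature's relevance to the target with mutual information (`mutual_info_classif`) and marks a relevant feature as weakly relevant when it is highly correlated with a stronger feature already kept, leaving the decision about weak features to the researcher. The thresholds, the correlation-based redundancy check, and the example dataset are all placeholders.

```python
# A minimal filter-style sketch: label features as strongly relevant,
# weakly relevant, or irrelevant. The scorer, redundancy check, and
# thresholds are illustrative assumptions, not the paper's algorithm.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif

def split_relevance(X, y, relevance_threshold=0.05, redundancy_threshold=0.9):
    """Return a dict mapping feature index -> 'strong' | 'weak' | 'irrelevant'.

    strong : relevant to the target and not redundant with a stronger feature
    weak   : relevant, but highly correlated with a stronger feature
    """
    mi = mutual_info_classif(X, y, random_state=0)        # relevance to the target
    corr = np.abs(np.corrcoef(X, rowvar=False))            # pairwise feature correlation
    order = np.argsort(mi)[::-1]                            # strongest features first

    labels = {}
    kept = []                                                # strongly relevant so far
    for i in order:
        if mi[i] < relevance_threshold:
            labels[i] = "irrelevant"
        elif any(corr[i, j] > redundancy_threshold for j in kept):
            labels[i] = "weak"                               # researcher decides whether to keep
        else:
            labels[i] = "strong"
            kept.append(i)
    return labels

if __name__ == "__main__":
    data = load_breast_cancer()
    labels = split_relevance(data.data, data.target)
    for idx, name in enumerate(data.feature_names):
        print(f"{name:30s} {labels[idx]}")
```

Because the filter works only from data characteristics (mutual information and correlation here), it stays independent of any particular classifier, which is the property the paper builds on.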

Citation (APA)

Mgala, M., & Mbogho, A. (2014). Selecting relevant features for classifier optimization. In Communications in Computer and Information Science (Vol. 488, pp. 211–222). Springer Verlag. https://doi.org/10.1007/978-3-319-13461-1_21
