Empirical study of feature selection methods for high dimensional data


Abstract

Background/Objectives: Feature selection is the process of selecting relevant features for use in model construction by removing redundant, irrelevant and noisy data. A typical application of text mining is the classification of messages and e-mails into spam and ham. Methods/Statistical Analysis: This article gives a comprehensive overview of feature selection methods for text mining. Filter methods such as Pearson correlation, chi-square, symmetrical uncertainty and mutual information are applied to select an optimal set of features. Findings: Filter feature selection methods are used to classify text data. Several classification algorithms are applied using the optimal set of features obtained, and their accuracy is verified on the chosen data set. Novelty/Improvements: A comparative study of filter methods for feature selection, together with classification algorithms for performance evaluation, is conducted in this research work.
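
As a rough illustration of the filter pipeline the abstract describes (score features with a filter criterion, keep the top-ranked ones, then train and evaluate a classifier), the Python sketch below applies chi-square feature selection to a toy spam/ham corpus with scikit-learn. The corpus, the number of selected features k, and the Naive Bayes classifier are illustrative assumptions, not details taken from the article; mutual information could be substituted as the scoring function.

```python
# Minimal sketch of filter feature selection for spam/ham text classification.
# Data, k, and classifier are placeholder assumptions for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2  # mutual_info_classif also available
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy corpus: 1 = spam, 0 = ham (placeholder data).
texts = [
    "win cash prize now", "limited offer click here",
    "free lottery winner claim", "meeting agenda attached",
    "lunch at noon tomorrow", "project status update",
]
labels = [1, 1, 1, 0, 0, 0]

# Bag-of-words representation of the messages.
X = CountVectorizer().fit_transform(texts)

# Filter method: keep the k features scoring highest under chi-square.
selector = SelectKBest(score_func=chi2, k=5)
X_selected = selector.fit_transform(X, labels)

# Evaluate a classifier on the reduced feature set.
X_train, X_test, y_train, y_test = train_test_split(
    X_selected, labels, test_size=0.33, random_state=0, stratify=labels
)
clf = MultinomialNB().fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```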

Citation (APA)

DeepaLakshmi, S., & Velmurugan, T. (2016). Empirical study of feature selection methods for high dimensional data. Indian Journal of Science and Technology, 9(39). https://doi.org/10.17485/ijst/2016/v9i39/90599
