Performance evaluation of naive bayes classifier with and without filter based feature selection

Abstract

Customer Relationship Management (CRM) tends to analyze datasets to find insights about data, which in turn helps to frame business strategies for the improvement of enterprises. Analyzing data in CRM requires computationally intensive models. Machine Learning (ML) algorithms help in analyzing such large, high-dimensional datasets. In most real-time datasets, the strong independence assumption of Naive Bayes (NB) between the attributes is violated, and other drawbacks in the data, such as irrelevant, partially irrelevant, and redundant attributes, lead to poor prediction performance. Feature selection is a preprocessing method applied to enhance the prediction of the NB model. Further, empirical experiments are conducted based on NB with feature selection and NB without feature selection. In this paper, an empirical study of attribute selection is conducted for five dissimilar filter-based feature selection methods: Relief-F, Pearson Correlation Coefficient (PCC), Symmetrical Uncertainty (SU), Gain Ratio (GR), and Information Gain (IG).
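To make the comparison concrete, the sketch below pairs a Naive Bayes classifier with one filter-based selector and contrasts it against plain NB. This is not the authors' implementation: the use of scikit-learn, GaussianNB, SelectKBest, the mutual-information approximation of Information Gain, and the synthetic stand-in dataset are all assumptions for illustration only.

    # Minimal sketch (assumed scikit-learn setup, not the paper's code):
    # Naive Bayes with and without a filter-based feature selector.
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.metrics import accuracy_score

    # Hypothetical high-dimensional dataset standing in for CRM data.
    X, y = make_classification(n_samples=1000, n_features=50,
                               n_informative=10, n_redundant=20,
                               random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Baseline: Naive Bayes without feature selection.
    nb = GaussianNB().fit(X_train, y_train)
    acc_plain = accuracy_score(y_test, nb.predict(X_test))

    # Filter-based feature selection (Information Gain approximated by
    # mutual information) applied as preprocessing before Naive Bayes.
    selector = SelectKBest(mutual_info_classif, k=10).fit(X_train, y_train)
    nb_fs = GaussianNB().fit(selector.transform(X_train), y_train)
    acc_fs = accuracy_score(y_test, nb_fs.predict(selector.transform(X_test)))

    print("NB without feature selection:", round(acc_plain, 3))
    print("NB with filter-based feature selection:", round(acc_fs, 3))

Swapping mutual_info_classif for another scoring function (e.g. chi-squared) mirrors how the paper substitutes Relief-F, PCC, SU, GR, or IG as the filter while keeping the NB classifier fixed.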

Citation (APA)

Prabha, D., Siva Subramanian, R., Balakrishnan, S., & Karpagam, M. (2019). Performance evaluation of naive bayes classifier with and without filter based feature selection. International Journal of Innovative Technology and Exploring Engineering, 8(10), 2154–2158. https://doi.org/10.35940/ijitee.J9376.0881019
