Scaling up the accuracy of Bayesian classifier based on frequent itemsets by m-estimate


Abstract

Frequent Itemsets Mining Classifier (FISC) is an improved Bayesian classifier that averages all classifiers built from frequent itemsets. In learning a Bayesian network classifier, estimating probabilities from a given set of training examples is crucial, and the m-estimate has been shown to scale up the accuracy of many Bayesian classifiers. A natural question is therefore whether FISC with the m-estimate can perform even better. In response to this question, this paper aims to scale up the accuracy of FISC by the m-estimate and proposes new probability estimation formulas. The experimental results show that the Laplace estimate used in the original FISC does not perform very well, whereas our m-estimate greatly scales up the accuracy, even outperforming the other outstanding Bayesian classifiers used for comparison. © 2010 Springer-Verlag.
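As background for the abstract's comparison, the standard Laplace estimate and m-estimate smoothing formulas can be sketched as below. This is a generic illustration of the two estimators, not the paper's FISC-specific formulas (which appear only in the full text); the parameter names `prior` and `m` follow the usual convention where `prior` is the prior probability of the event and `m` is the equivalent sample size.

```python
def laplace_estimate(count: int, total: int, num_values: int) -> float:
    """Laplace (add-one) estimate: (n_c + 1) / (n + k),
    where k is the number of possible values of the attribute/class."""
    return (count + 1) / (total + num_values)

def m_estimate(count: int, total: int, prior: float, m: float) -> float:
    """m-estimate: (n_c + m * p) / (n + m),
    where p is a prior probability and m weights the prior
    against the observed counts."""
    return (count + m * prior) / (total + m)

# With m = k and a uniform prior p = 1/k, the m-estimate
# reduces to the Laplace estimate.
```

Note that the Laplace estimate is the special case of the m-estimate obtained with a uniform prior and m equal to the number of attribute values; the paper's claim is that a better-chosen m-estimate improves on this default.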

Citation (APA)

Duan, J., Lin, Z., Yi, W., & Lu, M. (2010). Scaling up the accuracy of Bayesian classifier based on frequent itemsets by m-estimate. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6319 LNAI, pp. 357–364). https://doi.org/10.1007/978-3-642-16530-6_42
