Survey of improving Naive Bayes for classification


Abstract

The attribute conditional independence assumption of naive Bayes essentially ignores attribute dependencies and is often violated. On the other hand, although a Bayesian network can represent arbitrary attribute dependencies, learning an optimal Bayesian network classifier from data is intractable. Thus, improving naive Bayes has attracted much attention from researchers, and many effective and efficient improved algorithms have been proposed. In this paper, we review some of these improved algorithms and single out four main approaches to improvement: 1) feature selection; 2) structure extension; 3) local learning; 4) data expansion. We experimentally tested these approaches on all 36 UCI data sets selected by Weka and compared them to naive Bayes. The experimental results show that all of these approaches are effective. In the end, we discuss some main directions for future research on Bayesian network classifiers. © Springer-Verlag Berlin Heidelberg 2007.
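
The assumption the paper targets is the standard naive Bayes classification rule, in which the attributes a_1, ..., a_n are taken to be conditionally independent given the class c (textbook form, not reproduced from the paper itself):

P(c \mid a_1, \ldots, a_n) \propto P(c) \prod_{i=1}^{n} P(a_i \mid c)

As a concrete illustration of the first approach, feature selection, below is a minimal sketch in Python using scikit-learn; this is our own example, not the paper's algorithms or experimental setup. It compares plain Gaussian naive Bayes against a pipeline that first keeps only the k most informative attributes. The data set, k = 2, and the mutual-information criterion are all illustrative choices.

# Minimal sketch of the feature-selection approach to improving naive
# Bayes: drop weak or redundant attributes before fitting the model.
# Hypothetical choices (iris data, k=2, mutual information, Gaussian NB)
# are illustrative only, not the algorithms surveyed in the paper.
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

X, y = load_iris(return_X_y=True)

plain_nb = GaussianNB()
selective_nb = make_pipeline(
    SelectKBest(mutual_info_classif, k=2),  # keep the 2 most informative attributes
    GaussianNB(),
)

for name, clf in [("naive Bayes", plain_nb), ("selective NB", selective_nb)]:
    scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation
    print(f"{name}: mean accuracy = {scores.mean():.3f}")

On data sets with redundant or dependent attributes, removing some of them can reduce the harm done by the violated independence assumption, which is the intuition behind the feature-selection approach.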

Citation (APA)

Jiang, L., Wang, D., Cai, Z., & Yan, X. (2007). Survey of improving Naive Bayes for classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4632 LNAI, pp. 134–145). Springer Verlag. https://doi.org/10.1007/978-3-540-73871-8_14
