Empirical bounds on error differences when using Naive Bayes

Abstract

Here we revisit the Naïve Bayes classifier (NB). A problem from veterinary medicine with assumed independent features led us to look once again at this model. Why NB remains effective despite violation of the independence assumption is still open for discussion. In this study we try to develop a bound relating the dependency level of the features to the classification error of Naïve Bayes. As dependency between more than two features is difficult to define and express analytically, we consider a simple two-class, two-feature example problem. Using simulations, with dependency measured by Yule's Q-statistic, we establish empirical bounds relating the calculable error to the error under the true distribution. © Springer-Verlag Berlin Heidelberg 2005.
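The following is a minimal sketch, not the paper's actual experimental code, of the kind of simulation the abstract describes: two classes, two binary features, within-class dependence controlled via Yule's Q-statistic, and a comparison of the Bayes error under the true joint distribution against the error of a Naive Bayes rule that multiplies the class-conditional marginals. The class priors, marginal probabilities, and the grid search used to match a target Q are all illustrative assumptions.

```python
import numpy as np

def joint_2x2(p1, p2, q):
    """Joint pmf over (x1, x2) in {0,1}^2 with marginals P(x1=1)=p1,
    P(x2=1)=p2 and Yule's Q approximately equal to q.

    Yule's Q = (ad - bc) / (ad + bc), where
    a = P(0,0), b = P(0,1), c = P(1,0), d = P(1,1).
    The cell probabilities are found numerically by scanning d.
    """
    best, best_gap = None, np.inf
    for d in np.linspace(1e-6, min(p1, p2) - 1e-6, 20000):
        c = p1 - d           # P(x1=1, x2=0)
        b = p2 - d           # P(x1=0, x2=1)
        a = 1.0 - b - c - d  # P(x1=0, x2=0)
        if min(a, b, c) < 0:
            continue
        q_here = (a * d - b * c) / (a * d + b * c)
        if abs(q_here - q) < best_gap:
            best, best_gap = np.array([[a, b], [c, d]]), abs(q_here - q)
    return best

def errors(q, prior=0.5, p_class0=(0.3, 0.4), p_class1=(0.7, 0.6)):
    """Return (error of the true-joint Bayes rule, error of Naive Bayes).

    The marginal parameters and equal within-class Q are arbitrary
    illustrative choices, not values taken from the paper.
    """
    j0 = joint_2x2(*p_class0, q)
    j1 = joint_2x2(*p_class1, q)
    bayes_err, nb_err = 0.0, 0.0
    for x1 in (0, 1):
        for x2 in (0, 1):
            # True (unnormalised) class posteriors at this feature value.
            t0 = prior * j0[x1, x2]
            t1 = (1 - prior) * j1[x1, x2]
            # Naive Bayes scores: products of the class-conditional marginals.
            m0 = prior * j0[x1, :].sum() * j0[:, x2].sum()
            m1 = (1 - prior) * j1[x1, :].sum() * j1[:, x2].sum()
            # Each rule's error is the probability mass of the class it rejects.
            bayes_err += min(t0, t1)
            nb_err += t1 if m0 >= m1 else t0
    return bayes_err, nb_err

for q in (-0.9, -0.5, 0.0, 0.5, 0.9):
    be, ne = errors(q)
    print(f"Q = {q:+.1f}: Bayes error = {be:.4f}, NB error = {ne:.4f}, "
          f"difference = {ne - be:.4f}")
```

At Q = 0 the features are independent within each class, so the two errors coincide; sweeping Q away from zero shows how the gap between the Naive Bayes error and the true-distribution error grows with dependency, which is the quantity the paper's empirical bounds concern.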

Citation (APA)

Hoare, Z. (2005). Empirical bounds on error differences when using Naive Bayes. In Lecture Notes in Computer Science (Vol. 3686, pp. 28–34). Springer Verlag. https://doi.org/10.1007/11551188_3
