On Why Discretization Works for Naive-Bayes Classifiers


Abstract

We investigate why discretization can be effective in naive-Bayes learning. We prove a theorem that identifies particular conditions under which discretization will result in naive-Bayes classifiers delivering the same probability estimates as would be obtained if the correct probability density functions were employed. We discuss the factors that might affect naive-Bayes classification error under discretization. We suggest that the use of different discretization techniques can affect the classification bias and variance of the generated classifiers. We argue that by properly managing discretization bias and variance, we can effectively reduce naive-Bayes classification error.
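As a rough illustration of the abstract's central claim (this is a sketch for intuition, not code or data from the paper), the following NumPy example compares naive-Bayes posterior estimates computed from the correct class-conditional densities with estimates computed from equal-width discretization. The single Gaussian attribute, the class priors, the bin count, and the Laplace smoothing are all assumptions chosen only for the illustration.

```python
# Minimal sketch: naive-Bayes posterior from true densities vs. from discretized estimates.
# Assumptions (not from the paper): one numeric attribute, two equiprobable classes,
# X|c=0 ~ N(0,1), X|c=1 ~ N(2,1), equal-width bins, Laplace-smoothed bin frequencies.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data drawn from the assumed class-conditional densities.
n = 10_000
y = rng.integers(0, 2, n)
x = rng.normal(loc=2.0 * y, scale=1.0)

def gaussian_pdf(v, mu, sigma):
    return np.exp(-0.5 * ((v - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def posterior_true(v):
    # P(c=1 | x) computed from the correct probability density functions.
    p0 = 0.5 * gaussian_pdf(v, 0.0, 1.0)
    p1 = 0.5 * gaussian_pdf(v, 2.0, 1.0)
    return p1 / (p0 + p1)

def posterior_discretized(v, n_bins=20):
    # P(c=1 | x) estimated from equal-width discretization of the training data.
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    train_bins = np.digitize(x, edges[1:-1])          # bin index 0..n_bins-1 per instance
    test_bins = np.digitize(v, edges[1:-1])
    joint = []
    for c in (0, 1):
        counts = np.bincount(train_bins[y == c], minlength=n_bins) + 1.0  # Laplace smoothing
        joint.append(0.5 * counts[test_bins] / counts.sum())              # P(c) * P(bin | c)
    joint = np.array(joint)
    return joint[1] / joint.sum(axis=0)

test = np.linspace(-2.0, 4.0, 7)
for v, p_true, p_disc in zip(test, posterior_true(test), posterior_discretized(test)):
    print(f"x={v:5.2f}  true P(c=1|x)={p_true:.3f}  discretized={p_disc:.3f}")
```

With ample data and a reasonable bin count, the discretized estimates track the true-density estimates closely; varying the number of bins in this sketch gives a feel for the bias-variance trade-off the abstract refers to.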

Citation (APA)

Yang, Y., & Webb, G. I. (2003). On why discretization works for naive-bayes classifiers. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2903, pp. 440–452). Springer Verlag. https://doi.org/10.1007/978-3-540-24581-0_37
