Lazy averaged one-dependence estimators

Abstract

Naive Bayes is a probability-based classification model built on the conditional independence assumption. In many real-world applications, however, this assumption is often violated. In response, researchers have devoted substantial effort to improving the accuracy of naive Bayes by weakening the conditional independence assumption. The most recent such work is Averaged One-Dependence Estimators (AODE) [15], which demonstrates good classification performance. In this paper, we propose a novel lazy learning algorithm, Lazy Averaged One-Dependence Estimators (LAODE), that extends AODE. For a given test instance, LAODE first expands the training data by adding copies (clones) of each training instance in proportion to its similarity to the test instance, and then uses the expanded training data to build an AODE classifier for that test instance. We experimentally evaluate our algorithm in the Weka system [16] on the 36 UCI data sets [11] recommended by Weka [17], and compare it to naive Bayes [3], AODE [15], and LBR [19]. The experimental results show that LAODE significantly outperforms all the compared algorithms. © Springer-Verlag Berlin Heidelberg 2006.
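
The abstract describes a two-step procedure: a lazy expansion of the training data driven by similarity to the test instance, followed by standard AODE on the expanded data. Below is a minimal Python sketch of that idea, under stated assumptions: attributes are discrete, similarity is the number of matching attribute values, and clones are realized as integer instance weights (which yields the same frequency counts AODE uses). The function name, the Laplace smoothing, and the parent-frequency limit `m` are illustrative choices, not details taken from the paper.

```python
import numpy as np

def laode_predict(X, y, x_test, n_values, n_classes, m=1.0):
    """Classify one test instance with a lazily weighted AODE (sketch)."""
    X = np.asarray(X)
    y = np.asarray(y)
    x_test = np.asarray(x_test)
    n, d = X.shape

    # Lazy step: each training instance gets weight 1 plus the number of
    # attribute values it shares with the test instance (its clone count).
    w = 1.0 + (X == x_test).sum(axis=1)

    log_scores = np.full(n_classes, -np.inf)
    for c in range(n_classes):
        parent_scores = []
        for i in range(d):  # attribute i acts as the super-parent
            parent_mask = X[:, i] == x_test[i]
            if w[parent_mask].sum() < m:  # skip infrequent parents (AODE's frequency limit)
                continue
            joint_mask = parent_mask & (y == c)
            w_ci = w[joint_mask].sum()
            # log P(c, x_i), Laplace-smoothed over the weighted counts
            log_p = np.log((w_ci + 1.0) / (w.sum() + n_classes * n_values[i]))
            # add log P(x_j | c, x_i) for every other attribute j
            for j in range(d):
                if j == i:
                    continue
                w_num = w[joint_mask & (X[:, j] == x_test[j])].sum()
                log_p += np.log((w_num + 1.0) / (w_ci + n_values[j]))
            parent_scores.append(log_p)
        if parent_scores:
            # average the one-dependence estimators (in log space)
            log_scores[c] = np.logaddexp.reduce(parent_scores) - np.log(len(parent_scores))
    return int(np.argmax(log_scores))

# Toy usage: two binary attributes, two classes.
X = [[0, 1], [1, 1], [0, 0], [1, 0]]
y = [0, 0, 1, 1]
print(laode_predict(X, y, [0, 1], n_values=[2, 2], n_classes=2))
```

Representing clones as weights rather than literal copies keeps the per-instance cost linear in the training set size while leaving AODE's weighted frequency estimates unchanged.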

Citation (APA)

Jiang, L., & Zhang, H. (2006). Lazy averaged one-dependence estimators. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4013 LNAI, pp. 515–525). Springer Verlag. https://doi.org/10.1007/11766247_44
