Discriminatively learning selective averaged one-dependence estimators based on cross-entropy method

Abstract

Averaged One-Dependence Estimators [1] (AODE) is a recently proposed algorithm that weakens the attribute independence assumption of naïve Bayes by averaging the probability estimates of a collection of one-dependence estimators, and it demonstrates significantly higher classification accuracy. In this paper, we study the selective AODE problem and propose a cross-entropy-based method to search for the optimal subset of the one-dependence estimators. We experimentally evaluate our algorithm in terms of classification accuracy on the 36 UCI data sets recommended by Weka, comparing it to C4.5 [5], naïve Bayes, CL-TAN [6], HNB [7], AODE, and LAODE [3]. The experimental results show that our method significantly outperforms all the compared algorithms and remarkably reduces the number of one-dependence estimators used relative to AODE. © Springer-Verlag Berlin Heidelberg 2007.
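The abstract's core idea — using the cross-entropy method to search for a good subset of one-dependence estimators — can be sketched as follows. This is a minimal, generic illustration of cross-entropy subset search, not the authors' implementation: the paper scores candidate subsets by classification accuracy of the resulting selective AODE, whereas the `toy_fitness` function below is a hypothetical stand-in, and all parameter values are assumed defaults.

```python
# Sketch of the cross-entropy (CE) method for binary subset selection.
# A Bernoulli inclusion probability is kept per item (here: per
# one-dependence estimator); each iteration samples candidate subsets,
# keeps the elite fraction, and moves the probabilities toward the
# elite's inclusion frequencies.
import random

random.seed(0)  # for reproducibility of this sketch

def cross_entropy_subset_search(n_items, fitness, n_samples=50,
                                elite_frac=0.2, n_iters=30, smoothing=0.7):
    """Search for a high-fitness 0/1 subset vector of length n_items."""
    p = [0.5] * n_items  # Bernoulli inclusion probabilities
    n_elite = max(1, int(elite_frac * n_samples))
    best, best_score = None, float("-inf")
    for _ in range(n_iters):
        # Draw candidate subsets from the current Bernoulli model.
        samples = [[1 if random.random() < p[i] else 0
                    for i in range(n_items)]
                   for _ in range(n_samples)]
        scored = sorted(samples, key=fitness, reverse=True)
        elite = scored[:n_elite]
        if fitness(elite[0]) > best_score:
            best, best_score = elite[0], fitness(elite[0])
        # Update each inclusion probability toward the elite frequency,
        # with smoothing to avoid premature convergence.
        for i in range(n_items):
            freq = sum(s[i] for s in elite) / n_elite
            p[i] = smoothing * freq + (1 - smoothing) * p[i]
    return best, best_score

# Hypothetical fitness: reward subsets close to the target {0, 2, 4}.
# In the paper's setting, fitness would instead be the accuracy of the
# AODE classifier restricted to the chosen estimators.
target = {0, 2, 4}
def toy_fitness(subset):
    chosen = {i for i, b in enumerate(subset) if b}
    return -len(chosen.symmetric_difference(target))

best, score = cross_entropy_subset_search(6, toy_fitness)
```

Because the model concentrates probability mass on estimators that appear in high-scoring subsets, the search typically converges to a small subset, which is consistent with the abstract's claim that the method reduces the number of one-dependence estimators relative to full AODE.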

Cite

CITATION STYLE

APA

Wang, Q., Zhou, C. H., & Zhao, B. H. (2007). Discriminatively learning selective averaged one-dependence estimators based on cross-entropy method. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4456 LNAI, pp. 903–912). Springer Verlag. https://doi.org/10.1007/978-3-540-74377-4_95
