Highly scalable attribute selection for averaged one-dependence estimators

Abstract

Averaged One-Dependence Estimators (AODE) is a popular and effective approach to Bayesian learning. In this paper, a new attribute selection approach is proposed for AODE. It searches a large model space yet requires only a single extra pass through the training data, yielding a computationally efficient two-pass learning algorithm. The experimental results indicate that the new technique significantly reduces AODE's bias at the cost of a modest increase in training time. Its low bias and computational efficiency make it an attractive algorithm for learning from big data.
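
For context, the sketch below shows the standard AODE prediction rule that the paper builds on: the class-conditional joint probability is averaged over every attribute acting as a "super-parent", restricted to values seen often enough in training. This is a minimal illustration assuming discrete attributes and Laplace smoothing; the class name, smoothing constants, and threshold m are assumptions, and the paper's attribute-selection extension is not reproduced here.

# Minimal AODE sketch on discrete data (illustrative only; not the paper's code).
from collections import Counter

class AODE:
    def __init__(self, m=1):
        self.m = m  # minimum frequency for a value to serve as a super-parent

    def fit(self, X, y):
        # Single pass over the training data to collect joint counts.
        self.n = len(y)
        self.classes = sorted(set(y))
        self.d = len(X[0])
        self.values = [sorted({row[i] for row in X}) for i in range(self.d)]
        self.parent = Counter()  # counts of (class, attr index, attr value)
        self.joint = Counter()   # counts of (class, i, x_i, j, x_j)
        self.attr = Counter()    # counts of (attr index, attr value)
        for row, c in zip(X, y):
            for i, xi in enumerate(row):
                self.attr[(i, xi)] += 1
                self.parent[(c, i, xi)] += 1
                for j, xj in enumerate(row):
                    if j != i:
                        self.joint[(c, i, xi, j, xj)] += 1
        return self

    def _joint_prob(self, c, x):
        # Estimate P(c, x) averaged over eligible super-parents, Laplace-smoothed.
        total, used = 0.0, 0
        for i, xi in enumerate(x):
            if self.attr[(i, xi)] < self.m:
                continue  # skip super-parent values seen fewer than m times
            p = (self.parent[(c, i, xi)] + 1.0) / (
                self.n + len(self.classes) * len(self.values[i]))
            for j, xj in enumerate(x):
                if j != i:
                    p *= (self.joint[(c, i, xi, j, xj)] + 1.0) / (
                        self.parent[(c, i, xi)] + len(self.values[j]))
            total += p
            used += 1
        return total / used if used else 1.0 / len(self.classes)

    def predict(self, x):
        return max(self.classes, key=lambda c: self._joint_prob(c, x))

# Usage on a toy discrete dataset.
X = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "hot")]
y = ["no", "no", "yes", "yes"]
model = AODE().fit(X, y)
print(model.predict(("rain", "mild")))  # expected: "yes"

The paper's contribution layers attribute selection on top of this rule, using one additional pass over the data to evaluate candidate attribute subsets, which is why the abstract describes the result as a two-pass algorithm.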

Citation (APA)

Chen, S., Martinez, A. M., & Webb, G. I. (2014). Highly scalable attribute selection for averaged one-dependence estimators. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8444 LNAI, pp. 86–97). Springer Verlag. https://doi.org/10.1007/978-3-319-06605-9_8
