A comparison of ensemble creation techniques


Abstract

We experimentally evaluated bagging and six other randomization-based ensemble tree methods. Bagging uses randomization to create multiple training sets. Other approaches, such as Randomized C4.5, apply randomization in selecting a test at a given node of a tree. Still other approaches, such as random forests and random subspaces, apply randomization in the selection of attributes to be used in building the tree. Boosting, on the other hand, incrementally builds classifiers by focusing on examples misclassified by existing classifiers. Experiments were performed on 34 publicly available data sets. While each of the other six approaches has some strengths, we find that none of them is consistently more accurate than standard bagging when tested for statistical significance. © Springer-Verlag 2004.
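The three families of randomization described above can be sketched with off-the-shelf scikit-learn estimators. This is a minimal illustration, not the paper's experimental setup: `BaggingClassifier`, `RandomForestClassifier`, and `AdaBoostClassifier` stand in for the bagging, attribute-randomization, and boosting variants, and the iris data set stands in for the 34 data sets used in the study.

```python
# Hedged sketch: compares three ensemble-creation strategies on one small
# public data set. Not the paper's protocol (which used 34 data sets and
# significance testing); estimator choices here are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

models = {
    # Bagging: randomization via bootstrap-resampled training sets.
    "bagging": BaggingClassifier(DecisionTreeClassifier(),
                                 n_estimators=50, random_state=0),
    # Random forest: extra randomization in attribute selection at each node.
    "random_forest": RandomForestClassifier(n_estimators=50, random_state=0),
    # Boosting: incrementally reweights examples misclassified so far.
    "boosting": AdaBoostClassifier(n_estimators=50, random_state=0),
}

scores = {name: cross_val_score(m, X, y, cv=10).mean()
          for name, m in models.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.3f}")
```

On a single easy data set like iris, all three ensembles score similarly, which is consistent with the paper's finding that no variant consistently beats standard bagging.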


APA

Banfield, R. E., Hall, L. O., Bowyer, K. W., Bhadoria, D., Philip Kegelmeyer, W., & Eschrich, S. (2004). A comparison of ensemble creation techniques. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 3077, 223–232. https://doi.org/10.1007/978-3-540-25966-4_22
