Improving adaptive boosting with k-cross-fold validation

Abstract

As shown in the literature, Adaptive Boosting (Adaboost) is one of the best-known methods for increasing the performance of an ensemble of neural networks. We introduce a new method based on Adaboost in which we apply cross-validation to increase the diversity of the ensemble. We use cross-validation over the whole learning set to generate a specific training set and validation set for each network of the committee. We have tested Adaboost and Crossboost on seven databases from the UCI repository, comparing both methods with the mean percentage of error reduction and the mean increase of performance; the results show that Crossboost performs better. © Springer-Verlag Berlin Heidelberg 2006.
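The abstract describes the core combination: each network of the committee receives its own training/validation partition from k-fold cross-validation, while the sample weights are updated as in standard Adaboost. The sketch below is a minimal illustration of that idea under stated assumptions, not the authors' implementation; the crossboost function, the weight-driven resampling step, and the use of scikit-learn's KFold and MLPClassifier are all choices made for the example, and the classic alpha formula assumes a two-class problem.

import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier

def crossboost(X, y, k=5, seed=0):
    """Hypothetical sketch: k-fold splits give each network its own
    training/validation partition; sample weights follow Adaboost."""
    rng = np.random.default_rng(seed)
    n = len(X)
    weights = np.full(n, 1.0 / n)            # Adaboost sample weights
    ensemble, alphas = [], []
    folds = KFold(n_splits=k, shuffle=True, random_state=seed)
    for train_idx, val_idx in folds.split(X):
        # Each committee member trains on a fold-specific subset,
        # which is what increases diversity across the ensemble.
        # (val_idx would drive early stopping in a fuller version.)
        p = weights[train_idx] / weights[train_idx].sum()
        boot = rng.choice(train_idx, size=len(train_idx), p=p)
        net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500)
        net.fit(X[boot], y[boot])
        # Standard Adaboost weight update over the whole learning set
        # (binary-classification form of the alpha coefficient).
        miss = net.predict(X) != y
        err = np.clip(weights[miss].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        weights *= np.exp(np.where(miss, alpha, -alpha))
        weights /= weights.sum()
        ensemble.append(net)
        alphas.append(alpha)
    return ensemble, alphas

Resampling by the boosting weights is one standard way to train a learner that does not accept per-sample weights; in a fuller implementation the held-out fold of each split would be used for early stopping or model selection, which is where the cross-validation in the paper's method contributes beyond the weight updates.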

Citation (APA)

Torres-Sospedra, J., Hernández-Espinosa, C., & Fernández-Redondo, M. (2006). Improving adaptive boosting with k-cross-fold validation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4113 LNCS-I, pp. 397–402). Springer Verlag. https://doi.org/10.1007/11816157_46
