Improving adaptive boosting with a relaxed equation to update the sampling distribution

Abstract

Adaptive Boosting (Adaboost) is one of the best-known methods for building an ensemble of neural networks. In this paper we briefly analyze and combine two of the most important variants of Adaboost, Averaged Boosting (Aveboost) and Conservative Boosting (Conserboost), in order to build a more robust ensemble of neural networks. The combined method, called Averaged Conservative Boosting (ACB), applies the conservative equation used in Conserboost along with the averaging procedure used in Aveboost to update the sampling distribution. We have tested the methods on seven databases from the UCI repository. The results show that Averaged Conservative Boosting is the best-performing method. © Springer-Verlag Berlin Heidelberg 2007.
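The exact update equations are given in the paper; the sketch below is only a rough illustration of how such a combined update could look. It assumes the standard AdaBoost.M1 weighting as the baseline, a conservative rule that raises only the weights of misclassified patterns (as in Conserboost), and Oza's Aveboost-style running average D_{t+1} = (t * D_t + w_t) / (t + 1). All function and variable names (acb_update, alpha, etc.) are hypothetical, not taken from the paper.

# Minimal sketch of an ACB-style sampling-distribution update (assumptions above).
import numpy as np

def acb_update(dist, misclassified, alpha, t):
    """One hypothetical ACB distribution update.

    dist          : current sampling distribution D_t (sums to 1)
    misclassified : boolean mask, True where learner t erred
    alpha         : learner weight, e.g. 0.5 * ln((1 - e_t) / e_t)
    t             : 1-based boosting iteration index
    """
    # Conservative step: raise only the weights of misclassified samples;
    # correctly classified samples keep their current weight.
    w = dist * np.where(misclassified, np.exp(alpha), 1.0)
    w /= w.sum()  # renormalize so w is again a distribution

    # Averaging step (Aveboost-style): blend the conservative update
    # with the running distribution instead of replacing it outright.
    new_dist = (t * dist + w) / (t + 1)
    return new_dist / new_dist.sum()

# Tiny usage example with stand-in classifier errors:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 10
    dist = np.full(n, 1.0 / n)           # D_1 is uniform
    for t in range(1, 4):
        miss = rng.random(n) < 0.3       # stand-in for real network errors
        err = dist[miss].sum()
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        dist = acb_update(dist, miss, alpha, t)
        print(f"round {t}: error={err:.3f}, dist sums to {dist.sum():.3f}")

Compared with a plain AdaBoost update, the averaging step damps how quickly the distribution concentrates on hard patterns, which is the "relaxed" behavior the title refers to.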

Citation (APA)

Torres-Sospedra, J., Hernández-Espinosa, C., & Fernández-Redondo, M. (2007). Improving adaptive boosting with a relaxed equation to update the sampling distribution. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4507 LNCS, pp. 119–126). Springer Verlag. https://doi.org/10.1007/978-3-540-73007-1_15
