Training an ensemble of neural networks is an interesting way to build a multi-net system. One of the key design decisions for an ensemble is how to combine the networks' outputs into a single output. Although there are several important methods for building ensembles, Boosting is among the most prominent. Most Boosting-based methods use a specific combiner (the Boosting combiner). Although the Boosting combiner provides good results on boosting ensembles, results from previous papers show that the simple Output Average combiner can work better than the Boosting combiner. In this paper, we study the performance of sixteen different combination methods for ensembles previously trained with Adaptive Boosting and Average Boosting. The results show that the accuracy of ensembles trained with these original boosting methods can be improved by using an appropriate alternative combiner. © 2008 Springer-Verlag Berlin Heidelberg.
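The two combiners contrasted in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `output_average` averages the class-probability outputs of all networks, while `boosting_combiner` weights each network by log((1 − e) / e), the weighted vote used in AdaBoost (e is that network's weighted training error). The array shapes and the toy values below are assumptions for illustration.

```python
import numpy as np

def output_average(outputs):
    """Output Average combiner: mean of the networks' class-probability outputs.

    outputs: array of shape (n_networks, n_samples, n_classes).
    """
    return np.mean(outputs, axis=0)

def boosting_combiner(outputs, errors):
    """Boosting combiner: AdaBoost-style weighted vote.

    Each network k gets weight log((1 - e_k) / e_k), where e_k is its
    weighted training error; weights are normalized to sum to one.
    """
    errors = np.asarray(errors, dtype=float)
    weights = np.log((1.0 - errors) / errors)
    weights /= weights.sum()
    # Weighted sum over the network axis -> shape (n_samples, n_classes).
    return np.tensordot(weights, outputs, axes=1)

# Toy example: 3 networks, 1 sample, 2 classes (values are illustrative).
outputs = np.array([[[0.6, 0.4]],
                    [[0.7, 0.3]],
                    [[0.4, 0.6]]])
avg = output_average(outputs)
boost = boosting_combiner(outputs, errors=[0.1, 0.2, 0.3])
```

Both combiners return a class-probability vector per sample; the final prediction is its argmax. The difference the paper studies is precisely how much the choice between such combiners affects ensemble accuracy.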
CITATION STYLE
Torres-Sospedra, J., Hernández-Espinosa, C., & Fernández-Redondo, M. (2008). Decision fusion on boosting ensembles. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5064 LNAI, pp. 157–167). https://doi.org/10.1007/978-3-540-69939-2_16