Combining MF networks: A comparison among statistical methods and stacked generalization


Abstract

The two key factors in designing an ensemble of neural networks are how to train the individual networks and how to combine their different outputs into a single output. In this paper we focus on the combination module. We have proposed two methods based on Stacked Generalization as the combination module of an ensemble of neural networks. Here we perform a comparison between the two versions of Stacked Generalization and six statistical combination methods in order to determine the best combination method. We use the mean increase of performance and the mean percentage of error reduction for the comparison. The results show that the methods based on Stacked Generalization outperform the classical combiners. © Springer-Verlag Berlin Heidelberg 2006.
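For readers unfamiliar with the technique, the sketch below illustrates stacked generalization as the combination module of an ensemble of multilayer feedforward (MF) networks: several level-0 networks are trained on the data, and a level-1 combination network is trained on their outputs. This is a minimal illustrative reconstruction using scikit-learn, not the authors' implementation; the dataset, network sizes, ensemble size, and the choice of an MLP as the combiner are all assumptions made for the example.

    # Minimal sketch: stacked generalization as the combiner of an
    # ensemble of multilayer feedforward networks (illustrative only).
    from sklearn.datasets import load_iris
    from sklearn.ensemble import StackingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Level-0: several independently trained MF networks
    # (differing only in their random initialization here).
    base_networks = [
        ("mf_net_%d" % i,
         MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=i))
        for i in range(3)
    ]

    # Level-1: a combination network trained on the class-probability
    # outputs of the level-0 networks (the stacked-generalization combiner).
    ensemble = StackingClassifier(
        estimators=base_networks,
        final_estimator=MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000,
                                      random_state=0),
        stack_method="predict_proba",
    )

    ensemble.fit(X_train, y_train)
    print("test accuracy:", ensemble.score(X_test, y_test))

As a note on the evaluation measures mentioned in the abstract: in the ensemble literature the percentage of error reduction is conventionally computed as PER = 100 * (E_single - E_ensemble) / E_single, where E_single is the error of the best single network; the precise definition used by the authors appears in the full text.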

Citation (APA)

Torres-Sospedra, J., Hernández-Espinosa, C., & Fernández-Redondo, M. (2006). Combining MF networks: A comparison among statistical methods and stacked generalization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4087 LNAI, pp. 210–220). Springer Verlag. https://doi.org/10.1007/11829898_19
