We study how the error of an ensemble regression estimator can be decomposed into two components: one accounting for the individual errors of the members, and the other for the correlations within the ensemble. This is the well-known ambiguity decomposition; we present an alternative way to decompose the error and show how both decompositions have been exploited in a learning scheme. Using a scaling parameter in the decomposition, we can blend the gradient (and therefore the learning process) smoothly between two extremes: from concentrating on individual accuracies and ignoring diversity, up to a full non-linear optimization of all parameters, treating the ensemble as a single learning unit. We demonstrate that this also applies to ensembles using a soft combination of posterior probability estimates, and so can be utilised for classifier ensembles. © Springer-Verlag Berlin Heidelberg 2005.
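The ambiguity decomposition referred to above can be checked numerically. The sketch below (variable names and the toy data are our own, not from the paper) verifies the standard identity for a convex-combined regression ensemble, and illustrates how a scaling parameter could blend a per-member gradient between the two extremes; the blended-gradient line is a simplified illustrative form, not necessarily the paper's exact expression.

```python
import numpy as np

# Hypothetical toy setup: M ensemble members predicting a scalar target d.
rng = np.random.default_rng(0)
M = 5
d = 1.0                      # regression target
f = rng.normal(1.0, 0.3, M)  # individual member predictions
w = np.full(M, 1.0 / M)      # uniform convex combination weights

fbar = np.dot(w, f)          # ensemble prediction

# Ambiguity decomposition:
# (fbar - d)^2 = sum_i w_i (f_i - d)^2 - sum_i w_i (f_i - fbar)^2
avg_error = np.dot(w, (f - d) ** 2)     # weighted average individual error
ambiguity = np.dot(w, (f - fbar) ** 2)  # weighted spread around the ensemble
ensemble_error = (fbar - d) ** 2

assert np.isclose(ensemble_error, avg_error - ambiguity)

# A scaling parameter lam can blend the per-member gradient between the
# two extremes: lam = 0 trains each member purely on its own error;
# larger lam increasingly rewards disagreement with the ensemble.
lam = 0.5
grad = (f - d) - lam * (f - fbar)  # simplified illustrative gradient
```

The identity shows that, at fixed average individual error, increasing the spread (ambiguity) of the members strictly reduces the ensemble error, which is what the blended gradient exploits.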
Brown, G., Wyatt, J., & Sun, P. (2005). Between two extremes: Examining decompositions of the ensemble objective function. In Lecture Notes in Computer Science (Vol. 3541, pp. 296–305). Springer Verlag. https://doi.org/10.1007/11494683_30