On Aggregation in Ensembles of Multilabel Classifiers


Abstract

While a variety of ensemble methods for multilabel classification have been proposed in the literature, the question of how to aggregate the predictions of the individual ensemble members has received little attention so far. In this paper, we introduce a formal framework for ensemble multilabel classification, in which we distinguish two principal approaches: “predict then combine” (PTC), where the ensemble members first make loss-minimizing predictions that are subsequently combined, and “combine then predict” (CTP), which first aggregates information, such as marginal label probabilities, from the individual ensemble members and then derives a prediction from this aggregation. While both approaches generalize the voting techniques commonly used for multilabel ensembles, they allow the target performance measure to be taken explicitly into account. Concrete instantiations of CTP and PTC can therefore be tailored to concrete loss functions. Experimentally, we show that standard voting techniques are indeed outperformed by suitable instantiations of CTP and PTC, and we provide evidence that CTP performs well for decomposable loss functions, whereas PTC is the better choice for non-decomposable losses.
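To illustrate the distinction drawn in the abstract, the following is a minimal sketch of the two aggregation schemes for the Hamming-loss setting. The function names and the specific combination rules (probability averaging for CTP, majority voting over thresholded predictions for PTC) are illustrative assumptions, not necessarily the exact instantiations studied in the paper.

```python
import numpy as np

def ctp(prob_matrices, threshold=0.5):
    """Combine then predict: first average the marginal label probabilities
    across ensemble members, then threshold the averaged probabilities.
    Thresholding marginals at 0.5 is the risk-minimizing rule for Hamming
    loss, a decomposable measure.

    prob_matrices: array-like of shape (n_members, n_instances, n_labels)
    returns: binary predictions of shape (n_instances, n_labels)
    """
    avg = np.mean(np.asarray(prob_matrices), axis=0)
    return (avg >= threshold).astype(int)

def ptc(prob_matrices, threshold=0.5):
    """Predict then combine: each member first makes its own thresholded
    prediction, then the individual predictions are combined by majority
    vote per label (an illustrative choice of combination rule)."""
    preds = (np.asarray(prob_matrices) >= threshold).astype(int)
    votes = preds.mean(axis=0)  # fraction of members predicting the label
    return (votes >= 0.5).astype(int)
```

Note that the two schemes can disagree: if three members assign probabilities 0.45, 0.45, and 0.9 to a label, the averaged probability (0.6) makes CTP predict the label as relevant, while only one of three thresholded member predictions is positive, so PTC predicts it as irrelevant.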

Citation (APA)

Nguyen, V. L., Hüllermeier, E., Rapp, M., Loza Mencía, E., & Fürnkranz, J. (2020). On Aggregation in Ensembles of Multilabel Classifiers. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12323 LNAI, pp. 533–547). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-61527-7_35
