It is well known that an ensemble of predictors yields better accuracy than a single predictor, provided there is diversity in the ensemble. This diversity manifests as disagreement, or ambiguity, among the ensemble members. In this paper we focus on ensembles of classifiers based on different feature subsets, and we present a process for producing such ensembles that emphasizes diversity (ambiguity) among the members. This emphasis on diversity yields ensembles with low generalization error built from members with comparatively high individual generalization error. We compare this with ensembles produced by focusing only on the error of the individual members (without regard to overall diversity) and find that the ambiguity-based ensembles have lower generalization error. Further, we find that the ensemble members produced with a focus on ambiguity use fewer features on average than those selected on error alone. We suggest that this indicates these ensemble members are local learners.
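The core idea, an ensemble whose members see different feature subsets, with diversity measured as disagreement with the ensemble output, can be sketched roughly as follows. This is a toy illustration with 1-NN members and randomly drawn subsets on synthetic data, not the paper's actual subset-selection procedure; all function names and parameters are hypothetical:

```python
import random


def one_nn_predict(train, query, features):
    """Predict with a 1-NN classifier restricted to a feature subset."""
    best_label, best_dist = None, float("inf")
    for x, y in train:
        d = sum((x[f] - query[f]) ** 2 for f in features)
        if d < best_dist:
            best_dist, best_label = d, y
    return best_label


def ensemble_predict(train, query, subsets):
    """Majority vote over members, each trained on its own feature subset."""
    votes = [one_nn_predict(train, query, fs) for fs in subsets]
    return max(set(votes), key=votes.count), votes


def ambiguity(votes, ensemble_label):
    """Fraction of members disagreeing with the ensemble output."""
    return sum(v != ensemble_label for v in votes) / len(votes)


random.seed(0)


def sample(label, n):
    # Synthetic 2-class data in 6 dimensions; only dims 0-1 are informative.
    return [([label + random.gauss(0, 0.5) for _ in range(2)] +
             [random.gauss(0, 1) for _ in range(4)], label)
            for _ in range(n)]


train = sample(0, 20) + sample(1, 20)
subsets = [random.sample(range(6), 3) for _ in range(7)]  # 7 members, 3 features each
query = [0.0] * 6                                         # lies in class 0's region

label, votes = ensemble_predict(train, query, subsets)
print("ensemble label:", label, "ambiguity:", ambiguity(votes, label))
```

Members restricted to uninformative dimensions will often err individually, yet the vote can still be correct; the ambiguity value quantifies exactly that disagreement, which the paper's process seeks to encourage rather than suppress.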
CITATION STYLE
Zenobi, G., & Cunningham, P. (2001). Using diversity in preparing ensembles of classifiers based on different feature subsets to minimize generalization error. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2167, pp. 576–587). Springer Verlag. https://doi.org/10.1007/3-540-44795-4_49