A classifier team is used in preference to a single classifier in the expectation that it will be more accurate. Here we study the potential for improvement in classifier teams designed by the feature subspace method: the set of features is partitioned, and each subset is used by one classifier in the team. All partitions of a set of 10 features into 3 subsets containing 〈4, 4, 2〉 features and 〈4, 3, 3〉 features are enumerated, and nine combination schemes are applied to the three classifiers. We look at the distribution and the extremes of the improvement (or failure); the chances of the team outperforming the single best classifier if the feature space is partitioned at random; the relationship between the spread of the individual classifier accuracies and the team accuracy; and the performance of the combination schemes.
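The enumeration the abstract describes can be sketched in code. The snippet below is an illustration, not the authors' implementation: it generates every partition of a 10-element feature set into subsets of the two stated size profiles, treating subsets of equal size as unordered so each partition is counted once. The function name `partitions` and the deduplication strategy are assumptions for this sketch.

```python
from itertools import combinations

def partitions(features, sizes):
    """Yield each partition of `features` into disjoint subsets of the
    given sizes exactly once (equal-sized subsets are unordered)."""
    seen = set()

    def rec(remaining, sizes, acc):
        if not sizes:
            # Canonical key: a set of frozensets ignores subset order,
            # so (A, B, C) and (B, A, C) count as one partition.
            key = frozenset(acc)
            if key not in seen:
                seen.add(key)
                yield acc
            return
        for subset in combinations(sorted(remaining), sizes[0]):
            yield from rec(remaining - set(subset),
                           sizes[1:],
                           acc + (frozenset(subset),))

    yield from rec(set(features), list(sizes), ())

features = range(10)
# 10!/(4!4!2!)/2! = 1575 partitions of profile <4,4,2>
print(len(list(partitions(features, (4, 4, 2)))))  # 1575
# 10!/(4!3!3!)/2! = 2100 partitions of profile <4,3,3>
print(len(list(partitions(features, (4, 3, 3)))))  # 2100
```

In the experiment each subset in a partition would then be used to train one classifier of the three-member team, so 1575 + 2100 = 3675 teams are examined in total.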
Kuncheva, L. I., & Whitaker, C. J. (2001). Feature subsets for classifier combination: An enumerative experiment. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2096, pp. 228–237). Springer Verlag. https://doi.org/10.1007/3-540-48219-9_23