The key issue in classifier fusion is the diversity of the component models. To obtain the most diverse candidate models, we generate a large number of classifiers and partition the set into K disjoint subsets: classifiers with similar outputs fall into the same cluster, while classifiers that predict different class labels are assigned to different clusters. In the next step, one member of each cluster is selected, e.g. the one that exhibits the minimum average distance from the cluster center. Finally, the selected classifiers are combined using majority voting. Results from several experiments show that the candidate classifiers obtained this way are diverse and that their fusion improves classification accuracy.
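The cluster-and-select procedure described above can be sketched as follows. The abstract does not specify how the classifier pool is generated, how outputs are compared, or the value of K, so the bootstrap-trained trees, the validation-set label vectors used as cluster features, K = 5, and the k-means clustering below are all illustrative assumptions, not the paper's exact method.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

# 1. Generate a large pool of candidate classifiers
#    (here: decision trees trained on bootstrap samples -- an assumption).
pool = []
for i in range(30):
    idx = rng.randint(0, len(X_tr), len(X_tr))
    pool.append(DecisionTreeClassifier(random_state=i).fit(X_tr[idx], y_tr[idx]))

# 2. Represent each classifier by its vector of predicted labels on a
#    validation set, so that similar outputs mean nearby vectors.
outputs = np.array([clf.predict(X_val) for clf in pool], dtype=float)

# 3. Partition the pool into K disjoint clusters of similar classifiers.
K = 5
km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(outputs)

# 4. From each cluster, select the member closest to the cluster center.
selected = []
for k in range(K):
    members = np.flatnonzero(km.labels_ == k)
    dists = np.linalg.norm(outputs[members] - km.cluster_centers_[k], axis=1)
    selected.append(pool[members[np.argmin(dists)]])

# 5. Fuse the selected classifiers by majority voting.
def majority_vote(classifiers, X):
    preds = np.array([c.predict(X) for c in classifiers])
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, preds)

ensemble_pred = majority_vote(selected, X_val)
accuracy = (ensemble_pred == y_val).mean()
```

Clustering on output vectors (rather than, say, pairwise disagreement) is one convenient way to make "similar outputs" concrete; the selected representatives are then diverse by construction, since they come from different clusters.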
Gatnar, E. (2007). Cluster and select approach to classifier fusion. In Studies in Classification, Data Analysis, and Knowledge Organization (pp. 59–66). Springer. https://doi.org/10.1007/978-3-540-70981-7_7