In constructing a classifier ensemble, diversity is more important than the accuracy of its individual elements. One approach to reaching a diverse ensemble is to produce a pool of classifiers, define a metric that evaluates the diversity of a set of classifiers, and then extract from the pool a subset with a high diversity value. Using Bagging and Boosting as sources of diversity is another alternative. The third alternative is to partition the classifiers and then select one classifier from each partition; because the classifiers within a partition are highly similar, there is no need to let more than exactly one classifier per partition participate in the final ensemble. In this article, the performance of the proposed framework is evaluated on several real datasets from the UCI repository. The results show the effectiveness of the algorithm compared to the original bagging and boosting algorithms. © 2011 Springer-Verlag.
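The partition-and-select idea described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the disagreement measure, the greedy grouping rule, the `threshold` value, and the toy prediction vectors are all assumptions made for the example.

```python
def disagreement(p, q):
    """Fraction of samples on which two prediction vectors differ
    (used here as a simple pairwise diversity measure)."""
    return sum(a != b for a, b in zip(p, q)) / len(p)

def partition_pool(predictions, threshold=0.2):
    """Greedy partitioning (an assumed stand-in for the paper's method):
    a classifier joins an existing group if its disagreement with the
    group's first member is below `threshold`; otherwise it starts a group."""
    groups = []  # each group is a list of indices into `predictions`
    for i, p in enumerate(predictions):
        for g in groups:
            if disagreement(p, predictions[g[0]]) < threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

def select_ensemble(predictions, threshold=0.2):
    """Keep exactly one classifier (here, the first) from each partition."""
    return [g[0] for g in partition_pool(predictions, threshold)]

# Toy pool: prediction vectors of five hypothetical classifiers on six samples.
pool = [
    [0, 1, 1, 0, 1, 0],
    [0, 1, 1, 0, 1, 1],  # near-duplicate of the first
    [1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1],  # near-duplicate of the third
    [0, 0, 0, 1, 1, 1],
]
print(select_ensemble(pool))  # → [0, 2, 4]: one classifier per partition
```

Near-duplicate classifiers collapse into one partition, so only three of the five pooled classifiers enter the final ensemble, preserving diversity while shrinking the pool.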
CITATION STYLE
Parvin, H., Minaei-Bidgoli, B., & Beigi, A. (2011). A new classifier ensembles framework. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6881 LNAI, pp. 110–119). https://doi.org/10.1007/978-3-642-23851-2_12