Building weighted classifier ensembles through classifiers pruning

Abstract

Many theoretical and experimental studies have shown that ensemble learning is an effective technique for achieving better classification accuracy and stability than individual classifiers. In this paper, we propose a novel two-stage weighted classifier ensemble method based on classifier pruning. In the first stage, we use canonical correlation analysis (CCA) to model the maximum correlation relationships between training data points and base classifiers. Based on these globally multi-linear projections, a sparse regression method is proposed to prune the base classifiers, so that each test data point dynamically selects a subset of classifiers to form a unique ensemble; this reduces the effects of noisy input data and inaccurate classifiers from a global view. In the second stage, the pruned classifiers are weighted locally by a fusion method that exploits their generalization ability among the nearest neighbors of the test data point. In this way, each test data point builds a unique locally weighted classifier ensemble. Experimental results on several UCI data sets show that our method achieves better classification results than other ensemble methods such as Random Forests, Majority Voting, AdaBoost and DREP.
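To make the two-stage idea concrete, the following is a minimal, hedged sketch in NumPy. It is not the authors' implementation: the CCA projection and sparse regression of stage one are replaced by a simple per-point local-accuracy criterion, the base classifiers are random decision stumps, and all names, thresholds, and parameter values (`k`, `keep`) are illustrative assumptions. It only shows the general pattern of per-test-point pruning followed by locally weighted voting.

```python
# Sketch: per-test-point classifier pruning + locally weighted voting.
# NOTE: the paper's CCA + sparse-regression pruning is approximated here
# by keeping the classifiers with the best accuracy on the test point's
# nearest training neighbors; this is a simplified stand-in.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class training data: two Gaussian blobs.
n = 200
X = np.vstack([rng.normal(-1, 1, (n // 2, 2)), rng.normal(1, 1, (n // 2, 2))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

# Base classifiers: random decision stumps (feature index, threshold, sign).
def make_stumps(m):
    feats = rng.integers(0, X.shape[1], m)
    thrs = rng.uniform(-1, 1, m)
    signs = rng.choice([-1, 1], m)
    return list(zip(feats, thrs, signs))

def stump_predict(stump, pts):
    f, t, s = stump
    return (s * (pts[:, f] - t) > 0).astype(int)  # 0/1 labels

stumps = make_stumps(15)

def ensemble_predict(x, k=15, keep=5):
    """Classify one point with a point-specific pruned, weighted ensemble."""
    x = x.reshape(1, -1)
    # Local view: the k nearest training neighbors of x.
    idx = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
    Xk, yk = X[idx], y[idx]
    # Local accuracy of every base classifier on those neighbors.
    accs = np.array([(stump_predict(s, Xk) == yk).mean() for s in stumps])
    # Stage 1 (stand-in): prune to the `keep` locally best classifiers.
    best = np.argsort(accs)[-keep:]
    # Stage 2: accuracy-weighted vote among the surviving classifiers.
    votes = np.array([stump_predict(stumps[i], x)[0] for i in best])
    weights = accs[best]
    return int(np.dot(weights, votes) / weights.sum() > 0.5)

preds = np.array([ensemble_predict(x) for x in X])
print("training accuracy:", (preds == y).mean())
```

The key design point illustrated is that both the pruned subset and the fusion weights are recomputed for every test point, so no two points need share the same ensemble.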

APA

Cai, C. W., Wornyo, D. K., Wang, L., & Shen, X. J. (2018). Building weighted classifier ensembles through classifiers pruning. In Communications in Computer and Information Science (Vol. 819, pp. 131–139). Springer Verlag. https://doi.org/10.1007/978-981-10-8530-7_13
