Abstract
Multiple Classifier Systems (MCS) are used to improve on the classification performance of a single model. One of the most common techniques utilizing multiple decision trees is the random forest, where diversity between base classifiers is obtained by bagging the training dataset. In this paper, we propose an algorithm that horizontally partitions the learning set and uses decision trees as base models to obtain decision regions. In the proposed approach, the feature space is divided into disjoint subspaces. The locations of the subspace centroids, together with the size and location of the decision regions, are used to determine the weights needed in the final step of creating the MCS, i.e. the integration phase. The proposed algorithm was evaluated on multiple open-source benchmark datasets and compared, using accuracy and the Matthews correlation coefficient as performance measures, with two existing MCS methods: random forest and majority voting. Statistical analysis confirms an improvement in recognition compared to the random forest. In addition, we prove that, for an infinitely dense division of the feature space, the proposed algorithm is equivalent to majority voting.
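The abstract's integration phase can be illustrated with a minimal sketch: the feature space is split into disjoint subspaces, each holding a base classifier, and at prediction time each classifier's vote is weighted by how close its subspace centroid lies to the query point. Note this is an assumption-laden toy, not the paper's exact method: a one-feature threshold "stump" stands in for a decision tree, and the inverse-distance weight is an illustrative choice, not the weighting scheme from the paper.

```python
def make_stump(threshold):
    """Hypothetical base classifier: class 1 if x >= threshold, else 0.
    Stands in for a decision tree trained on one subspace's data."""
    return lambda x: 1 if x >= threshold else 0

def weighted_ensemble_predict(x, subspaces):
    """Integration phase sketch.

    subspaces: list of (centroid, classifier) pairs, one per disjoint
    subspace of the (here one-dimensional) feature space.
    """
    scores = {0: 0.0, 1: 0.0}
    for centroid, clf in subspaces:
        # Illustrative geometric weight: a classifier whose subspace
        # centroid is closer to x contributes more to the score.
        weight = 1.0 / (1.0 + abs(x - centroid))
        scores[clf(x)] += weight
    return max(scores, key=scores.get)

# Three disjoint subspaces on the feature axis, centroids 1.5, 4.5, 7.5,
# each with its own (hypothetical) base classifier.
subspaces = [(1.5, make_stump(3.5)),
             (4.5, make_stump(4.5)),
             (7.5, make_stump(3.0))]

# At x = 4.0 the raw votes are [1, 0, 1], so plain majority voting would
# return 1; the geometric weights favor the nearby middle classifier.
print(weighted_ensemble_predict(4.0, subspaces))  # -> 0
```

This also hints at the equivalence claim in the abstract: as the division grows infinitely dense, every point has an equally close subspace structure and the weights flatten out, so the weighted score reduces to a plain majority vote.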
Citation
Biedrzycki, J., & Burduk, R. (2020). Weighted scoring in geometric space for decision tree ensemble. IEEE Access, 8, 82100–82107. https://doi.org/10.1109/ACCESS.2020.2990721