Combining multiple k-nearest neighbor classifiers for text classification by reducts


Abstract

The basic k-nearest neighbor (k-NN) classifier works well in text classification, but there is still room to improve its performance. Combining multiple classifiers is an effective technique for improving accuracy: general-purpose combining algorithms such as Bagging and Boosting significantly improve classifiers such as decision trees, rule learners, and neural networks. Unfortunately, these combining methods do not improve the nearest neighbor classifier. In this paper we present a new approach that generates multiple reducts based on rough set theory and applies them to improve the performance of the k-nearest neighbor classifier. This paper describes the proposed technique and provides experimental results.
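The abstract does not spell out the combination scheme, so the sketch below is only a minimal illustration in Python of the general idea: train one k-NN classifier per reduct (a reduct being a minimal subset of features that preserves discernibility under rough set theory) and combine the per-reduct predictions. The reducts here are supplied as hypothetical feature-index lists rather than computed from rough sets, unweighted majority voting is an assumed combination rule, and the function name and scikit-learn usage are illustrative, not the authors' implementation.

```python
from collections import Counter

import numpy as np
from sklearn.neighbors import KNeighborsClassifier


def combine_knn_by_reducts(X_train, y_train, X_test, reducts, k=3):
    """Train one k-NN classifier per reduct (a list of feature indices)
    and combine the individual predictions by unweighted majority vote."""
    per_reduct_preds = []
    for reduct in reducts:
        clf = KNeighborsClassifier(n_neighbors=k)
        # Each component classifier sees only the features in its reduct.
        clf.fit(X_train[:, reduct], y_train)
        per_reduct_preds.append(clf.predict(X_test[:, reduct]))
    votes = np.array(per_reduct_preds)  # shape: (n_reducts, n_test_docs)
    # For each test document, return the most frequent label across reducts.
    return np.array([Counter(votes[:, i]).most_common(1)[0][0]
                     for i in range(votes.shape[1])])


# Toy usage with random data and three hypothetical reducts
# over a 6-feature term space.
rng = np.random.default_rng(0)
X_train = rng.random((40, 6))
y_train = rng.integers(0, 2, size=40)
X_test = rng.random((5, 6))
reducts = [[0, 1, 2], [2, 3, 4], [1, 4, 5]]
print(combine_knn_by_reducts(X_train, y_train, X_test, reducts, k=3))
```

Because each component classifier is restricted to a different feature subset, the ensemble members disagree in useful ways even though plain Bagging or Boosting of k-NN tends not to help, which is the intuition the paper builds on.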

Citation (APA)

Bao, Y., & Ishii, N. (2002). Combining multiple k-nearest neighbor classifiers for text classification by reducts. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2534, pp. 340–347). Springer Verlag. https://doi.org/10.1007/3-540-36182-0_34
