KRNN: k Rare-class Nearest Neighbour classification


Abstract

Imbalanced classification is a challenging problem. Re-sampling and cost-sensitive learning are global strategies for generality-oriented algorithms such as the decision tree, targeting inter-class imbalance. We study local strategies for specificity-oriented learning algorithms such as the k Nearest Neighbour (KNN) classifier, to address the within-class imbalance issue of positive data sparsity. We propose the k Rare-class Nearest Neighbour (KRNN) algorithm, which directly adjusts the induction bias of KNN: it forms dynamic query neighbourhoods and further adjusts the positive posterior probability estimate to bias classification towards the rare class. We conducted extensive experiments on thirty real-world and artificial datasets to evaluate the performance of KRNN. Our experiments showed that KRNN significantly improved KNN for classification of the rare class, and often outperformed re-sampling and cost-sensitive learning strategies with generality-oriented base learners.
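The two ideas in the abstract — a query neighbourhood that grows until it covers the rare class, and a smoothed, positively biased posterior estimate — can be sketched as follows. This is an illustrative simplification under assumed design choices (expand-until-positive neighbourhood growth, Laplace smoothing), not the paper's exact KRNN formulation; `krnn_predict` and its parameters are hypothetical names.

```python
import numpy as np

def krnn_predict(X_train, y_train, x_query, k=5, threshold=0.5):
    """Sketch of a rare-class-biased KNN (illustrative, not the paper's exact method).

    The query neighbourhood starts at size k and expands until it
    contains at least one positive (rare-class) example; the positive
    posterior is then estimated with Laplace smoothing, which biases
    decisions towards the rare class in sparse regions.
    """
    # Rank training points by Euclidean distance to the query.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    order = np.argsort(dists)

    # Grow the neighbourhood from k until it captures a positive
    # example, or until all training points are included.
    size = k
    while size < len(order) and y_train[order[:size]].sum() == 0:
        size += 1

    neigh = y_train[order[:size]]
    pos = int(neigh.sum())

    # Laplace-smoothed positive posterior estimate.
    p_pos = (pos + 1) / (size + 2)
    return 1 if p_pos >= threshold else 0
```

With a lowered decision threshold, a query in an all-negative region can still be labelled positive once the expanded neighbourhood reaches a rare-class point, which is the kind of rare-class bias the abstract describes.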

APA

Zhang, X., Li, Y., Kotagiri, R., Wu, L., Tari, Z., & Cheriet, M. (2017). KRNN: k Rare-class Nearest Neighbour classification. Pattern Recognition, 62, 33–44. https://doi.org/10.1016/j.patcog.2016.08.023
