Deep kNN for medical image classification

Abstract

Human-level diagnostic performance from intelligent systems often depends on a large amount of training data. However, for some diseases the data available for model training may be limited, which causes widely adopted deep learning models to generalize poorly. A simple alternative for small-class prediction is the traditional k-nearest neighbor (kNN) classifier. However, because kNN is non-parametric, it is difficult to combine kNN classification with the learning of the feature extractor. This paper proposes an end-to-end learning strategy that unifies kNN classification and feature extraction. The basic idea is to enforce that each training sample and its K nearest neighbors belong to the same class while the feature extractor is being learned. Experiments on multiple small-class and class-imbalanced medical image datasets showed that the proposed deep kNN outperforms both kNN and other strong classifiers.
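The key mechanism described above, encouraging every training sample's K nearest neighbors in feature space to carry the same label, can be approximated with a neighborhood loss computed on mini-batch embeddings. The following is a minimal illustrative sketch in PyTorch, not the authors' exact formulation: the function name deep_knn_loss, the margin-based push term, and the restriction of the neighbor search to the current batch are all assumptions made for the example.

    import torch
    import torch.nn.functional as F

    def deep_knn_loss(features, labels, k=5, margin=1.0):
        # features: (B, D) embeddings from the feature extractor
        # labels:   (B,)   integer class labels
        # Assumes k < batch size B.

        # Pairwise Euclidean distances within the mini-batch.
        dists = torch.cdist(features, features, p=2)       # (B, B)
        dists.fill_diagonal_(float("inf"))                  # exclude each sample itself

        # Distances and indices of the k nearest neighbors of each sample.
        knn_dists, knn_idx = dists.topk(k, largest=False)   # both (B, k)

        # 1 where a neighbor shares the anchor's label, 0 otherwise.
        same_class = labels[knn_idx].eq(labels.unsqueeze(1)).float()

        # Pull same-class neighbors closer; push different-class neighbors
        # beyond a margin (a contrastive surrogate for kNN consistency).
        pull = same_class * knn_dists.pow(2)
        push = (1.0 - same_class) * F.relu(margin - knn_dists).pow(2)
        return (pull + push).mean()

During training, such a loss would be minimized with respect to the feature extractor's parameters (possibly alongside a standard cross-entropy term), and at test time a plain kNN classifier would be run in the learned feature space; these training details are likewise assumptions for illustration rather than statements about the paper.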

Citation (APA)

Zhuang, J., Cai, J., Wang, R., Zhang, J., & Zheng, W. S. (2020). Deep kNN for medical image classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12261 LNCS, pp. 127–136). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-59710-8_13
