Accuracy and specificity trade-off in k-nearest neighbors classification


Abstract

The k-NN rule is a simple, flexible and widely used nonparametric decision method, connected to many problems in image classification and retrieval such as annotation and content-based search. As the number of classes grows and finer classification is required (e.g. a specific dog breed), high accuracy often becomes unattainable, and the system will frequently suggest a wrong label. However, predicting a broader concept (e.g. dog) is much more reliable, and still useful in practice. Thus, sacrificing some specificity for a more secure prediction is often desirable. This problem has recently been posed as an accuracy-specificity trade-off. In this paper we study the accuracy-specificity trade-off in k-NN classification, evaluating the impact of related techniques (posterior probability estimation and metric learning). Experimental results show that a proper combination of k-NN and metric learning can be very effective and achieve good performance.
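The back-off idea described in the abstract — predict the fine label only when its estimated posterior is high enough, otherwise fall back to a broader parent concept — can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the toy label hierarchy, the confidence threshold, and the vote-fraction posterior estimate are all assumptions made for the example.

```python
from collections import Counter
import math

# Hypothetical two-level hierarchy: each fine label has a broader parent concept.
PARENT = {"husky": "dog", "poodle": "dog", "tabby": "cat", "siamese": "cat"}

def knn_predict(train, query, k=3, threshold=0.8):
    """Classify `query` with k-NN; if the estimated posterior of the
    winning fine label falls below `threshold`, back off to its parent
    (broader but more reliable) concept."""
    # Take the k training points closest to the query (Euclidean distance).
    neighbors = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    label, count = votes.most_common(1)[0]
    posterior = count / k  # crude posterior estimate: fraction of neighbor votes
    return label if posterior >= threshold else PARENT[label]

train = [((0.0, 0.0), "husky"), ((0.1, 0.1), "poodle"),
         ((0.0, 0.2), "husky"), ((5.0, 5.0), "tabby")]
# Only 2 of 3 neighbors agree (posterior ~0.67 < 0.8), so the
# prediction backs off from the breed to the broader concept "dog".
print(knn_predict(train, (0.05, 0.05), k=3, threshold=0.8))
```

Lowering `threshold` trades the other way: with `threshold=0.6` the same query keeps the specific label `"husky"`, illustrating how a single knob moves the system along the accuracy-specificity curve.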

Citation (APA)

Herranz, L., & Jiang, S. (2015). Accuracy and specificity trade-off in k-nearest neighbors classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9004, pp. 133–146). Springer Verlag. https://doi.org/10.1007/978-3-319-16808-1_10
