K-NN boosting prototype learning for object classification

Abstract

Image classification is a challenging task in computer vision; for example, fully understanding real-world images may involve both scene and object recognition. Many approaches have been proposed to extract meaningful descriptors from images and classify them in a supervised learning framework. In this chapter, we revisit the classic k-nearest neighbors (k-NN) classification rule, which has been shown to be very effective when dealing with local image descriptors. However, k-NN still has some major drawbacks, mainly due to the uniform voting among the nearest prototypes in the feature space. We propose a generalization of the classic k-NN rule in a supervised learning (boosting) framework: we redefine the voting rule as a strong classifier that linearly combines predictions from the k closest prototypes. To induce this classifier, we propose a novel learning algorithm, MLNN (Multiclass Leveraged Nearest Neighbors), which also provides a simple and efficient procedure for prototype selection. We tested our method on object classification with 12 object categories and on scene recognition with 15 real-world categories. Experiments show significant improvement over classic k-NN in terms of classification performance.
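As a rough illustration of the leveraged voting rule the abstract describes (a sketch, not the authors' implementation), the snippet below replaces uniform k-NN voting with a linear combination of per-prototype class predictions. The function name, the symmetric class coding (+1 for the prototype's class, -1/(C-1) for the others), and the leveraging coefficients `alphas` are assumptions made for this example; in MLNN the coefficients would be learned by boosting, whereas here they are taken as given.

```python
import numpy as np

def leveraged_knn_predict(query, prototypes, labels, alphas, k, n_classes):
    """Classify `query` by a leveraged vote over its k nearest prototypes.

    prototypes : (n, d) array of prototype feature vectors
    labels     : (n,) array of integer class labels in [0, n_classes)
    alphas     : (n,) leveraging coefficients (assumed given; MLNN learns them)
    """
    # Find the k nearest prototypes in Euclidean distance.
    dists = np.linalg.norm(prototypes - query, axis=1)
    nearest = np.argsort(dists)[:k]

    # Accumulate the weighted class-membership votes.
    scores = np.zeros(n_classes)
    for j in nearest:
        # Symmetric coding: +1 for the prototype's own class,
        # -1/(C-1) for every other class (an assumption of this sketch).
        y = np.full(n_classes, -1.0 / (n_classes - 1))
        y[labels[j]] = 1.0
        scores += alphas[j] * y

    # The predicted class maximizes the leveraged vote.
    return int(np.argmax(scores))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    protos = rng.normal(size=(30, 5))      # 30 prototypes in 5-D
    labels = rng.integers(0, 3, size=30)   # 3 classes
    alphas = np.ones(30)                   # equal weights recover plain k-NN
    query = rng.normal(size=5)
    print(leveraged_knn_predict(query, protos, labels, alphas, k=5, n_classes=3))
```

Note that with all coefficients equal the rule reduces to the classic uniform k-NN vote, and prototypes whose learned coefficient is zero drop out entirely, which is how boosting the coefficients doubles as prototype selection.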

Citation

Piro, P., Barlaud, M., Nock, R., & Nielsen, F. (2013). K-NN boosting prototype learning for object classification. In Lecture Notes in Electrical Engineering (Vol. 158, pp. 37–53). https://doi.org/10.1007/978-1-4614-3831-1_3
