Survey of Nearest Neighbor Condensing Techniques

  • Amal M
  • Ahmed B
Citations: N/A
Readers: 67 (Mendeley users who have this article in their library)

Abstract

The nearest neighbor rule assigns an unknown element to a category according to the categories of its known nearest neighbors. This technique is effective in many fields, such as event recognition, text categorization and object recognition. Its prime advantage is its simplicity, but its main drawback is its computational cost on large training sets. The research community has addressed this drawback as the problem of prototype selection, and several techniques, known as condensing techniques, have been proposed to solve it. Condensing algorithms try to determine a significantly reduced set of prototypes while keeping the performance of the 1-NN rule on this set close to the performance reached on the complete training set. In this paper we present a survey of condensing KNN techniques, namely CNN, RNN, FCNN, DROP1-5, DEL, IKNN, TRKNN and CBP. All of these techniques improve efficiency in computation time, but none of them can prove the minimality of its resulting set. One possibility is therefore to hybridize them with other algorithms, called modern heuristics or metaheuristics, which can themselves improve the solution. The metaheuristics with proven results in attribute selection are principally genetic algorithms and tabu search. This paper also sheds light on some recent techniques that follow this approach.
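For concreteness, below is a minimal sketch of Hart's CNN (Condensed Nearest Neighbor) rule, the earliest of the condensing techniques named in the abstract. This is not the survey authors' code: the NumPy-based 1-NN classifier, the `cnn_condense` helper, the `max_passes` bound and the toy two-blob dataset are illustrative assumptions.

```python
# Minimal sketch of Hart's CNN condensing rule (illustrative, not the paper's code).
import numpy as np

def nn_predict(store_X, store_y, x):
    """Classify x with the 1-NN rule over the current prototype set."""
    dists = np.linalg.norm(store_X - x, axis=1)
    return store_y[np.argmin(dists)]

def cnn_condense(X, y, max_passes=10):
    """Return indices of a condensed prototype set (Hart's CNN).

    Start with the first sample, then repeatedly scan the training set,
    adding every sample that the 1-NN rule over the current store
    misclassifies, until a full pass adds nothing.
    """
    store = [0]                      # indices kept as prototypes
    for _ in range(max_passes):      # bound on passes is an assumption for the sketch
        added = False
        for i in range(len(X)):
            if i in store:
                continue
            if nn_predict(X[store], y[store], X[i]) != y[i]:
                store.append(i)      # keep points the condensed set gets wrong
                added = True
        if not added:                # consistent: every training point is now
            break                    # correctly classified by the store
    return np.array(store)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two Gaussian blobs as a toy two-class training set (assumption).
    X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)
    keep = cnn_condense(X, y)
    print(f"kept {len(keep)} of {len(X)} training points")
```

The loop stops once a full pass adds no prototypes, i.e. the condensed set is consistent with the training set under the 1-NN rule; nothing in the procedure guarantees that this set is minimal, which is exactly the limitation the abstract raises and the motivation for hybridizing condensing with metaheuristics such as genetic algorithms or tabu search.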

Cite (APA)

Amal, M.-A., & Ahmed, B.-A. (2011). Survey of Nearest Neighbor Condensing Techniques. International Journal of Advanced Computer Science and Applications, 2(11). https://doi.org/10.14569/ijacsa.2011.021110
