Feature deforming for improved similarity-based learning

Abstract

The performance of similarity-based classifiers, such as K-NN, depends strongly on the input-space representation, with respect to both feature relevance and feature interdependence. Feature weighting is a well-known technique that aims to improve performance by adjusting the importance of each feature in the classification decision. In this paper, we propose a non-linear feature transform for continuous features, which we call feade. The transform is applied prior to classification and produces a new set of features, each obtained by locally deforming the original feature according to a generalised mutual information metric evaluated over different regions of the feature's value range. The algorithm is particularly efficient: its complexity is linear in both the number of dimensions and the sample size, and it requires no classifier pre-training. Evaluation on real datasets shows an improvement in the performance of the K-NN classifier.
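
The abstract gives only a high-level description of feade, so the following is a rough, hypothetical sketch of the general idea rather than the authors' algorithm: each continuous feature is warped piecewise-linearly so that value regions carrying more mutual information about the class label are stretched while less informative regions are compressed, and a standard K-NN classifier is then trained on the transformed features. The class name MIFeatureDeformer, the n_bins parameter, the quantile binning, and the per-bin mutual-information estimate are illustrative assumptions, not details taken from the paper.

# Hypothetical illustration: mutual-information-driven piecewise-linear
# feature deformation prior to K-NN (a simplification, not the feade
# algorithm from the paper).
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier


class MIFeatureDeformer:
    """Warp each continuous feature so that regions of its value range
    carrying more class information occupy more of the warped axis."""

    def __init__(self, n_bins=8):
        self.n_bins = n_bins

    def fit(self, X, y):
        X = np.asarray(X, dtype=float)
        self.edges_, self.new_edges_ = [], []
        for j in range(X.shape[1]):
            col = X[:, j]
            # Quantile bin edges over the feature's value range.
            edges = np.unique(np.quantile(col, np.linspace(0, 1, self.n_bins + 1)))
            bins = np.clip(np.searchsorted(edges, col, side="right") - 1,
                           0, len(edges) - 2)
            # Relevance of each region: MI between bin membership and the label.
            w = np.array([
                mutual_info_classif((bins == b).astype(int).reshape(-1, 1), y,
                                    discrete_features=True)[0]
                for b in range(len(edges) - 1)
            ]) + 1e-6  # keep every warped bin strictly positive in width
            self.edges_.append(edges)
            # New bin widths proportional to relevance: informative regions
            # are stretched, uninformative ones compressed.
            self.new_edges_.append(np.concatenate([[0.0], np.cumsum(w)]))
        return self

    def transform(self, X):
        X = np.asarray(X, dtype=float)
        out = np.empty_like(X)
        for j, (edges, new_edges) in enumerate(zip(self.edges_, self.new_edges_)):
            col = np.clip(X[:, j], edges[0], edges[-1])
            bins = np.clip(np.searchsorted(edges, col, side="right") - 1,
                           0, len(edges) - 2)
            frac = (col - edges[bins]) / np.diff(edges)[bins]
            out[:, j] = new_edges[bins] + frac * np.diff(new_edges)[bins]
        return out


# Example usage on a standard dataset.
if __name__ == "__main__":
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    deformer = MIFeatureDeformer(n_bins=8).fit(X_tr, y_tr)
    knn = KNeighborsClassifier(n_neighbors=5)
    knn.fit(deformer.transform(X_tr), y_tr)
    print("K-NN accuracy on deformed features:",
          knn.score(deformer.transform(X_te), y_te))

In this sketch the warp is fitted on the training split only and reused for the test split, so the deformed representation can be fed to any off-the-shelf K-NN implementation. The paper reports that feade itself runs in time linear in the number of dimensions and samples; the per-bin mutual-information loop above is written for readability, not efficiency.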

Citation (APA)

Petridis, S., & Perantonis, S. J. (2004). Feature deforming for improved similarity-based learning. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 3025, pp. 201–209). Springer Verlag. https://doi.org/10.1007/978-3-540-24674-9_22
