Unsupervised parameter estimation of non linear scaling for improved classification in the dissimilarity space

Abstract

The non-linear scaling of given dissimilarities, by raising them to a power in the (0,1) interval, is often useful for improving the classification performance in the corresponding dissimilarity space. The optimal value for the power can be found by a grid search over a leave-one-out cross-validation of the classifier: a procedure that may become costly for large dissimilarity matrices and, since it relies on labels, cannot capture the global effect of such a scaling. Herein, we propose an entirely unsupervised criterion that, when optimized, leads to a suboptimal but often good enough value of the scaling power. The criterion is based on a trade-off between the dispersion of data in the dissimilarity space and the corresponding intrinsic dimensionality, such that the concentrating effects of the power transformation on both the space axes and the spatial distribution of the objects are rationed.
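The abstract does not give the exact form of the criterion, but the idea it describes can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: `scaled_criterion` stands in for the paper's trade-off, using total variance as the dispersion measure and a PCA-based participation ratio as the intrinsic-dimensionality estimate, and `best_rho` performs the grid search over powers in (0,1].

```python
import numpy as np

def scaled_criterion(D, rho):
    """Illustrative criterion (assumption, not the paper's formula):
    dispersion of the power-scaled dissimilarity space divided by a
    PCA-based intrinsic-dimensionality estimate."""
    X = D ** rho                     # each row: one object represented by its
                                     # (scaled) dissimilarities to all others
    Xc = X - X.mean(axis=0)          # center the dissimilarity space
    cov = Xc.T @ Xc / len(X)
    eigvals = np.linalg.eigvalsh(cov)
    dispersion = eigvals.sum()       # total variance of the embedded points
    # Participation ratio: a common spectrum-based intrinsic-dimensionality
    # estimate; the paper's own definition may differ.
    idim = dispersion ** 2 / (eigvals ** 2).sum()
    return dispersion / idim

def best_rho(D, rhos=np.linspace(0.05, 1.0, 20)):
    """Unsupervised grid search: pick the power maximizing the criterion."""
    return max(rhos, key=lambda r: scaled_criterion(D, r))
```

A smaller power shrinks both the spread of the axes and the effective dimensionality; dividing one by the other is one plausible way to "ration" the two concentrating effects, in the spirit of the abstract.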


APA

Orozco-Alzate, M., Duin, R. P. W., & Bicego, M. (2016). Unsupervised parameter estimation of non linear scaling for improved classification in the dissimilarity space. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10029 LNCS, pp. 74–83). Springer Verlag. https://doi.org/10.1007/978-3-319-49055-7_7
