A price we pay for inexact dimensionality reduction


Abstract

In biometric and biomedical pattern classification tasks one faces high-dimensional data, so feature selection or feature extraction is necessary, and the accuracy of both procedures depends on the sample size. We considered, both analytically and by simulation, the increase in classification error caused by using sample-based K-class linear discriminant analysis for dimensionality reduction. Applying statistical analysis, we derived an analytical expression for the expected classification error. We showed theoretically that as the sample size grows, the classification error on the (K-1)-dimensional data at first decreases but later starts to increase; the maximum is reached when the size n of the K class training sets approaches the dimensionality p. When n > p, the classification error decreases monotonically. We demonstrate this peaking effect on real-world biomedical and biometric data sets, and show that regularising the within-class scatter matrix can reduce or even eliminate it.
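The peaking effect and the benefit of regularising the within-class scatter can be reproduced with a small simulation. The sketch below (plain numpy, not the authors' code; all parameter values are illustrative assumptions) trains a sample-based two-class linear discriminant on Gaussian data with the total training size n equal to the dimensionality p — the regime where the abstract predicts the error peak — once with the plain plug-in scatter estimate and once with a ridge-regularised scatter matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

def lda_error(n_per_class, p, lam, n_trials=30, n_test=500):
    """Average test error of a sample-based two-class linear discriminant.

    lam = 0 gives the plain plug-in rule (pseudo-inverse of the pooled
    within-class scatter); lam > 0 ridge-regularises the scatter matrix,
    a simple stand-in for the regularisation discussed in the abstract.
    """
    delta = np.zeros(p)
    delta[0] = 2.0  # class means differ along the first axis only
    errs = []
    for _ in range(n_trials):
        # Training samples: unit-covariance Gaussians with shifted means.
        X0 = rng.standard_normal((n_per_class, p))
        X1 = rng.standard_normal((n_per_class, p)) + delta
        m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
        Xc = np.vstack([X0 - m0, X1 - m1])
        Sw = Xc.T @ Xc / (2 * n_per_class - 2) + lam * np.eye(p)
        w = np.linalg.pinv(Sw) @ (m1 - m0)  # discriminant direction
        thr = w @ (m0 + m1) / 2             # midpoint threshold
        # Independent test samples from the same two populations.
        T0 = rng.standard_normal((n_test, p))
        T1 = rng.standard_normal((n_test, p)) + delta
        err = ((T0 @ w > thr).mean() + (T1 @ w <= thr).mean()) / 2
        errs.append(err)
    return float(np.mean(errs))

# Total training size n = 2 * 10 = 20 equals the dimensionality p,
# where the pooled scatter estimate is (near-)singular.
p = 20
plain = lda_error(n_per_class=10, p=p, lam=0.0)
ridge = lda_error(n_per_class=10, p=p, lam=1.0)
print(f"plain LDA error: {plain:.3f}, regularised: {ridge:.3f}")
```

With these settings the regularised discriminant should come out noticeably closer to the Bayes error (about 0.16 for this mean separation) than the plain plug-in rule, whose scatter estimate is rank-deficient at n = p. Sweeping `n_per_class` while holding `p` fixed reproduces the non-monotone error curve described above.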

Citation (APA)

Raudys, S., Valaitis, V., Pabarskaite, Z., & Biziuleviciene, G. (2015). A price we pay for inexact dimensionality reduction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9044, pp. 289–300). Springer Verlag. https://doi.org/10.1007/978-3-319-16480-9_29
