Comparing linear feature space transformations for correlated features


Abstract

In automatic speech recognition, a common method for decorrelating features and reducing feature space dimensionality is Linear Discriminant Analysis (LDA). In this paper, LDA is compared with other linear feature space transformation schemes, since many alternative methods have been proposed and in some cases yield higher recognition accuracy. Approaches including MLLT, HLDA, SHLDA, PCA, and combined schemes were implemented and compared; experiments show that all of these methods lead to similar results. Furthermore, recent research has shown that the LDA algorithm becomes unreliable when its input features are strongly correlated. A stable solution to this correlated-feature problem, a concatenation scheme combining PCA and LDA, is proposed and verified. Finally, several transformation algorithms are evaluated on both uncorrelated and strongly correlated features. © 2008 Springer-Verlag Berlin Heidelberg.
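The concatenation scheme mentioned in the abstract can be pictured with a short sketch. The following Python snippet (using scikit-learn) is an illustrative assumption, not the authors' implementation: it assumes PCA is applied first to decorrelate the features before LDA, and uses synthetic data and parameter choices of our own.

# Hypothetical sketch of a PCA -> LDA cascade for strongly correlated
# features; data, dimensions, and component counts are illustrative only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Toy feature matrix with strongly correlated columns: each base feature is
# duplicated with small noise, so the scatter matrices are near-singular.
n, d, classes = 600, 13, 4
base = rng.normal(size=(n, d))
X = np.hstack([base, base + 0.01 * rng.normal(size=(n, d))])  # 2*d correlated dims
y = rng.integers(0, classes, size=n)

# Stage 1: PCA decorrelates the inputs and drops near-degenerate directions.
# Stage 2: LDA then finds discriminative directions in a well-conditioned space.
pca_lda = make_pipeline(
    PCA(n_components=d),
    LinearDiscriminantAnalysis(n_components=classes - 1),
)
Z = pca_lda.fit_transform(X, y)
print(Z.shape)  # (600, 3): at most (classes - 1) discriminative dimensions

Applying LDA directly to X in this setting would rely on an ill-conditioned within-class scatter matrix; the PCA stage is what makes the second step numerically stable, which is the intuition behind the combined scheme described above.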

Citation (APA)

Vásquez, D., Gruhn, R., Brueckner, R., & Minker, W. (2008). Comparing linear feature space transformations for correlated features. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5078 LNCS, pp. 176–187). https://doi.org/10.1007/978-3-540-69369-7_20
