A direct criterion minimization based fMLLR via gradient descend

Abstract

Adaptation techniques are necessary in automatic speech recognizers to improve recognition accuracy. Linear transformation methods (MLLR or fMLLR) are the most popular when only limited adaptation data are available. fMLLR is a feature-space transformation; this is an advantage over MLLR, which transforms the entire acoustic model. The classical fMLLR estimation maximizes a likelihood criterion based on the statistics of individual Gaussian components. We propose an approach that takes into account the overall likelihood of an HMM state: it estimates the transformation by optimizing the ML criterion of the HMM directly using a gradient descent algorithm. © 2013 Springer-Verlag.
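
For readers unfamiliar with the notation, the following is a minimal sketch of the kind of objective the abstract refers to; the symbols and exact form are assumptions and may differ from the paper. Writing the extended feature vector as \xi_t = [x_t^\top, 1]^\top and the fMLLR transform as W = [A, b], so that the adapted feature is W\xi_t, a state-level ML criterion sums over the whole Gaussian mixture of the HMM state j_t aligned with frame t,

L(W) = \sum_t \Big[ \log|\det A| + \log \sum_m c_{j_t m} \, \mathcal{N}(W\xi_t;\ \mu_{j_t m}, \Sigma_{j_t m}) \Big],

and W is refined iteratively with a gradient step (ascent on L, equivalently descent on -L),

W^{(k+1)} = W^{(k)} + \eta \, \frac{\partial L}{\partial W}\Big|_{W = W^{(k)}},

in contrast to the classical estimation, which maximizes an auxiliary function built from per-component occupation statistics \gamma_m(t).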

Citation (APA)

Vaněk, J., & Zajíc, Z. (2013). A direct criterion minimization based fMLLR via gradient descend. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8082 LNAI, pp. 52–59). https://doi.org/10.1007/978-3-642-40585-3_8
