Maximum Penalized Likelihood Kernel Regression for Fast Adaptation

Abstract

This paper proposes a nonlinear generalization of the popular maximum-likelihood linear regression (MLLR) adaptation algorithm using kernel methods. The proposed method, called maximum penalized likelihood kernel regression adaptation (MPLKR), applies kernel regression with appropriate regularization to determine the affine model transform in a kernel-induced high-dimensional feature space. Although this is not the first attempt to apply kernel methods to conventional linear adaptation algorithms, MPLKR has an advantage over most other kernelized adaptation methods, such as kernel eigenvoice or kernel eigen-MLLR: it is a convex optimization problem whose solution is guaranteed to be globally optimal. In fact, the adapted Gaussian means can be obtained analytically by simply solving a system of linear equations. From a Bayesian perspective, MPLKR can also be considered the kernel version of maximum a posteriori linear regression (MAPLR) adaptation. Supervised and unsupervised speaker adaptation using MPLKR were evaluated on the Resource Management and Wall Street Journal 5K tasks, respectively, achieving word error rate reductions of 23.6% and 15.5% over the speaker-independent model. © 2006 IEEE.
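To make the abstract's central computational claim concrete (that the penalized kernel regression solution reduces to solving a system of linear equations), here is a minimal sketch using generic kernel ridge regression. It is not the authors' exact MPLKR formulation: the RBF kernel choice, the regularization weight lam, and all function names are illustrative assumptions.

```python
# A sketch of penalized kernel regression whose solution is a linear solve.
# This is generic kernel ridge regression, NOT the paper's MPLKR; the kernel,
# the penalty lam, and all names here are hypothetical stand-ins.
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """RBF kernel matrix k(x, z) = exp(-gamma * ||x - z||^2)."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2 * X @ Z.T
    return np.exp(-gamma * sq)

def fit_kernel_ridge(X, Y, lam=1e-2, gamma=1.0):
    """Solve the penalized regression (K + lam*I) alpha = Y.

    The objective is quadratic in alpha, so this single linear solve
    yields the globally optimal dual weights.
    """
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), Y)

def predict(X_train, alpha, X_new, gamma=1.0):
    """Regression outputs for new inputs (e.g., adapted mean vectors)."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Toy usage: regress target vectors on speaker-independent mean vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))            # e.g., 50 Gaussian means, 8-dim
Y = X @ rng.normal(size=(8, 8)) + 0.1 * rng.normal(size=(50, 8))
alpha = fit_kernel_ridge(X, Y, lam=0.1, gamma=0.5)
adapted = predict(X, alpha, X, gamma=0.5)
```

Because the penalized objective is convex, the single linear solve gives the global optimum, which mirrors the advantage the abstract highlights over kernel eigenvoice and kernel eigen-MLLR methods.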

Citation (APA)

Mak, B. K. W., Lai, T. C., Tsang, I. W., & Kwok, J. T. Y. (2009). Maximum penalized likelihood kernel regression for fast adaptation. IEEE Transactions on Audio, Speech, and Language Processing, 17(7), 1372–1381. https://doi.org/10.1109/TASL.2009.2019920
