A comparative study of RPCL and MCE based discriminative training methods for LVCSR

Abstract

This paper presents a comparative study of two discriminative methods, Rival Penalized Competitive Learning (RPCL) and Minimum Classification Error (MCE), for Large Vocabulary Continuous Speech Recognition (LVCSR) tasks. MCE aims at minimizing a smoothed sentence error on the training data, while RPCL focuses on avoiding misclassification by enforcing the learning of the correct class and de-learning its best rival class. For a fair comparison, both discriminative mechanisms are implemented at the state level. The LVCSR results show that both MCE and RPCL outperform Maximum Likelihood Estimation (MLE), and that RPCL has better discriminative and generative abilities than MCE. © 2012 Springer-Verlag.
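The learn/de-learn mechanism that the abstract attributes to RPCL can be illustrated with a minimal prototype-based sketch. This is not the paper's state-level HMM implementation; the function, the Euclidean-distance rival selection, and the rates `alpha` and `beta` are illustrative assumptions.

```python
import numpy as np

def rpcl_update(prototypes, x, correct, alpha=0.05, beta=0.005):
    """One illustrative RPCL step on class prototypes.

    The prototype of the correct class is moved toward the sample x
    (learning), while the nearest incorrect-class prototype -- the
    "rival" -- is pushed away (de-learning) at a much smaller rate.
    All names and rates here are illustrative, not from the paper.
    """
    dists = np.linalg.norm(prototypes - x, axis=1)
    # Rival: the closest prototype among the incorrect classes.
    rival = min((i for i in range(len(prototypes)) if i != correct),
                key=lambda i: dists[i])
    prototypes[correct] += alpha * (x - prototypes[correct])  # learn
    prototypes[rival] -= beta * (x - prototypes[rival])       # de-learn
    return prototypes
```

After one step, the correct prototype is slightly closer to the sample and the rival slightly farther away; the asymmetry between `alpha` and `beta` is what distinguishes RPCL's penalization from plain competitive learning.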

APA

Pang, Z., Wu, X., & Xu, L. (2012). A comparative study of RPCL and MCE based discriminative training methods for LVCSR. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7202 LNCS, pp. 27–34). https://doi.org/10.1007/978-3-642-31919-8_4
