An empirical study on language model adaptation using a metric of domain similarity

Abstract

This paper presents an empirical study of four techniques for language model adaptation, including a maximum a posteriori (MAP) method and three discriminative training methods, applied to Japanese Kana-Kanji conversion. We compare the performance of these methods from various angles by adapting the baseline model to four adaptation domains. In particular, we attempt to interpret the character error rate (CER) results by correlating them with the characteristics of the adaptation domain, measured using the information-theoretic notion of cross entropy. We show that this metric correlates well with the CER performance of the adaptation methods, and that the discriminative methods are not only superior to the MAP-based method in achieving larger CER reductions, but are also more robust against variation in the similarity between the background and adaptation domains.
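The abstract does not spell out how the domain-similarity metric is computed; a common formulation is the per-word cross entropy of adaptation-domain text measured under the background language model, where a lower value indicates a more similar domain. The sketch below illustrates this with a smoothed unigram model; the function name, smoothing scheme, and toy corpora are illustrative assumptions, not taken from the paper (which uses n-gram models).

```python
import math
from collections import Counter

def cross_entropy(background_counts, adaptation_tokens, vocab_size, alpha=1.0):
    """Per-word cross entropy (bits/word) of adaptation-domain text under a
    background unigram model with add-alpha smoothing. Lower values suggest
    the adaptation domain is closer to the background domain."""
    total = sum(background_counts.values())
    log_prob_sum = 0.0
    for w in adaptation_tokens:
        p = (background_counts.get(w, 0) + alpha) / (total + alpha * vocab_size)
        log_prob_sum += math.log2(p)
    return -log_prob_sum / len(adaptation_tokens)

# Hypothetical toy example: background corpus vs. an adaptation-domain sample.
background = Counter("the model adapts the language model to the new domain".split())
adaptation = "the adapted model covers the target domain".split()
print(cross_entropy(background, adaptation, vocab_size=len(background) + 1))
```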

Citation (APA)

Yuan, W., Gao, J., & Suzuki, H. (2005). An empirical study on language model adaptation using a metric of domain similarity. In Lecture Notes in Computer Science (Vol. 3651 LNAI, pp. 957–968). https://doi.org/10.1007/11562214_83
