N-Gram posterior probabilities for statistical machine translation


Abstract

Word posterior probabilities are a common approach for confidence estimation in automatic speech recognition and machine translation. We generalize this idea, introduce n-gram posterior probabilities, and show how these can be used to improve translation quality. Additionally, we introduce a sentence length model based on posterior probabilities. We show significant improvements on the Chinese-English NIST task: the absolute improvement of the BLEU score is between 1.1% and 1.6%.
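The abstract does not spell out how n-gram posteriors are computed; one common variant (a minimal sketch, not necessarily the exact formulation in the paper) estimates them from an N-best list: each hypothesis gets a sentence posterior by normalizing its model score, and an n-gram's posterior is the sum of the posteriors of the hypotheses that contain it. The function name and the `(tokens, log_score)` input format below are illustrative assumptions.

```python
import math
from collections import defaultdict

def ngram_posteriors(nbest, n=2):
    """Estimate n-gram posterior probabilities from an N-best list.

    nbest: list of (hypothesis_tokens, log_score) pairs.
    Sketch only: sentence posteriors come from a softmax over the
    log scores, and an n-gram's posterior is the total posterior
    mass of the hypotheses containing it at least once.
    """
    # Softmax over log scores, shifted by the max for stability.
    m = max(score for _, score in nbest)
    z = sum(math.exp(score - m) for _, score in nbest)
    sent_post = [math.exp(score - m) / z for _, score in nbest]

    probs = defaultdict(float)
    for (toks, _), p in zip(nbest, sent_post):
        # Count each distinct n-gram once per hypothesis.
        seen = {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
        for gram in seen:
            probs[gram] += p
    return dict(probs)
```

With two equally scored hypotheses `a b c` and `a b d`, the bigram `(a, b)` receives posterior 1.0 (it occurs in both), while `(b, c)` and `(b, d)` each receive 0.5. In rescoring, such posteriors can serve as additional feature scores for each hypothesis.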

Cite

APA

Zens, R., & Ney, H. (2006). N-Gram posterior probabilities for statistical machine translation. In HLT-NAACL 2006 - Statistical Machine Translation, Proceedings of the Workshop (pp. 72–77). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1654650.1654661
