Word posterior probabilities are a common approach for confidence estimation in automatic speech recognition and machine translation. We generalize this idea, introduce n-gram posterior probabilities, and show how these can be used to improve translation quality. Additionally, we introduce a sentence length model based on posterior probabilities. We show significant improvements on the Chinese-English NIST task: the absolute improvement of the BLEU score is between 1.1% and 1.6%.
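To illustrate the core idea, here is a minimal, simplified sketch of how n-gram posterior probabilities can be estimated from an N-best list: each hypothesis receives a posterior by normalizing its (scaled) model score, and an n-gram's posterior is the total posterior mass of the hypotheses containing it. This position-independent formulation and the `scale` parameter are simplifying assumptions for illustration, not the paper's exact definition.

```python
import math
from collections import defaultdict

def ngram_posteriors(nbest, n=2, scale=1.0):
    """Estimate n-gram posterior probabilities from an N-best list.

    nbest: list of (tokens, log_score) pairs produced by a decoder.
    scale: assumed scaling factor for the log scores (hypothetical knob,
           analogous to a posterior-sharpening temperature).
    Returns a dict mapping each n-gram (tuple of tokens) to the summed
    posterior probability of the hypotheses that contain it.
    """
    # Softmax over hypothesis log scores -> hypothesis posteriors.
    max_s = max(s for _, s in nbest)
    weights = [math.exp(scale * (s - max_s)) for _, s in nbest]
    z = sum(weights)
    posteriors = [w / z for w in weights]

    ngram_post = defaultdict(float)
    for (tokens, _), p in zip(nbest, posteriors):
        # Count each n-gram at most once per hypothesis.
        seen = {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
        for g in seen:
            ngram_post[g] += p
    return dict(ngram_post)

# Toy usage: a bigram shared by every hypothesis gets posterior 1.0.
nbest = [(["the", "cat", "sat"], -1.0),
         (["the", "cat", "ran"], -2.0)]
post = ngram_posteriors(nbest, n=2)
```

Such n-gram posteriors can then serve as additional feature functions when rescoring the N-best list, which is the kind of use the abstract refers to.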
Zens, R., & Ney, H. (2006). N-Gram posterior probabilities for statistical machine translation. In HLT-NAACL 2006 - Statistical Machine Translation, Proceedings of the Workshop (pp. 72–77). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1654650.1654661