The state of the art in statistical machine translation is currently represented by phrase-based models, which typically incorporate large tables of phrase-pair and word n-gram probabilities. In this work, we investigate data compression methods for efficiently encoding n-gram and phrase-pair probabilities, which are usually stored as 32-bit floating point numbers. We measured the impact of compression on translation quality through a phrase-based decoder trained on two distinct tasks: the translation of European Parliament speeches from Spanish to English, and the translation of news agency text from Chinese to English. We show that with a very simple quantization scheme all probabilities can be encoded in just 4 bits, with a relative loss in BLEU score of only 1.0% and 1.6% on the two tasks, respectively.
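As a rough illustration of what a simple 4-bit quantization scheme can look like, the sketch below builds a 16-entry codebook over a set of log-probabilities using equally populated bins and maps each value to a 4-bit index. This is a minimal sketch under assumed details, not the authors' exact method; the function names, the NumPy dependency, and the toy data are all illustrative.

```python
# Minimal sketch of 4-bit probability quantization (illustrative, not the paper's exact scheme).
import numpy as np

def build_codebook(values, bits=4):
    """Partition the value distribution into 2**bits equally populated bins
    and use each bin's mean as its codeword."""
    levels = 2 ** bits
    # Bin edges at empirical quantiles, so every bin holds roughly the same number of values.
    edges = np.quantile(values, np.linspace(0.0, 1.0, levels + 1))
    codewords = np.array([
        values[(values >= lo) & (values <= hi)].mean()
        for lo, hi in zip(edges[:-1], edges[1:])
    ])
    return edges, codewords

def quantize(values, edges):
    """Map each value to a 4-bit code (0..15) by locating its bin."""
    return np.searchsorted(edges[1:-1], values, side="right").astype(np.uint8)

def dequantize(codes, codewords):
    """Recover an approximate value from its 4-bit code."""
    return codewords[codes]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in for phrase-pair / n-gram log-probabilities.
    logprobs = np.log(rng.uniform(1e-9, 1.0, 100_000))
    edges, codewords = build_codebook(logprobs, bits=4)
    codes = quantize(logprobs, edges)
    approx = dequantize(codes, codewords)
    print("mean absolute quantization error:", np.abs(approx - logprobs).mean())
```

Each probability is thus replaced by a 4-bit index into a small codebook shared across the table, which is what allows the memory footprint per value to drop from 32 bits to 4.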
CITATION STYLE
Federico, M., & Bertoldi, N. (2006). How many bits are needed to store probabilities for phrase-based translation? In HLT-NAACL 2006 - Statistical Machine Translation, Proceedings of the Workshop (pp. 94–101). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1654650.1654664