Learning from errors is a crucial aspect of improving expertise. Building on this idea, we discuss a robust statistical framework for analysing the impact of different error types on machine translation (MT) output quality. Our approach is based on linear mixed-effects models, which make it possible to analyse error-annotated MT output while accounting for the variability inherent in the specific experimental setting from which the empirical observations are drawn. Our experiments are carried out on different language pairs involving Chinese, Arabic and Russian as target languages. We report findings concerning the impact of different error types both at the level of human perception of quality and with respect to performance results measured with automatic metrics.
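To make the modelling idea concrete, below is a minimal NumPy sketch of a random-intercept analysis in the spirit the abstract describes. All specifics here are assumptions for illustration (simulated quality scores, hypothetical error-type counts, a per-judge random intercept); the paper's actual analysis uses full mixed-effects estimation (e.g. REML, as in lme4 or statsmodels `MixedLM`), whereas this sketch uses plain OLS for the fixed effects and a method-of-moments estimate for the variance components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 40 MT sentences, each scored by 8 judges.
n_sent, n_judge, n_types = 40, 8, 3
beta = np.array([-0.8, -0.4, -0.2])   # assumed impact of each error type
sigma_u, sigma_e = 0.5, 0.3           # judge random-intercept / residual sd

counts = rng.poisson(1.0, (n_sent, n_types)).astype(float)  # error counts
u = rng.normal(0.0, sigma_u, n_judge)                       # judge intercepts

# Simulated adequacy scores: fixed effects + judge effect + noise.
y = (4.0 + counts @ beta)[:, None] + u[None, :] \
    + rng.normal(0.0, sigma_e, (n_sent, n_judge))

# Fixed effects via OLS on the stacked data (consistent even with the
# random intercept present; REML would additionally weight observations
# by the implied covariance structure).
Xs = np.hstack([np.ones((n_sent * n_judge, 1)),
                np.repeat(counts, n_judge, axis=0)])
ys = y.reshape(-1)
coef, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

# Method-of-moments variance components: the within-judge residual
# variance estimates sigma_e^2; the excess variance of per-judge mean
# residuals estimates sigma_u^2.
resid = (ys - Xs @ coef).reshape(n_sent, n_judge)
sigma_e2_hat = resid.var(axis=0, ddof=1).mean()
sigma_u2_hat = max(resid.mean(axis=0).var(ddof=1)
                   - sigma_e2_hat / n_sent, 0.0)
```

The recovered coefficients on the error-type counts play the role of the per-error-type quality impacts discussed in the abstract, while the two variance components separate judge-level variability from residual noise.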
Federico, M., Negri, M., Bentivogli, L., & Turchi, M. (2014). Assessing the impact of translation errors on machine translation quality with mixed-effects models. In EMNLP 2014 - 2014 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 1643–1653). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/d14-1172