Morpho-syntactic information for automatic error analysis of statistical machine translation output


Abstract

Evaluation of machine translation output is an important but difficult task. Over recent years, a variety of automatic evaluation measures have been studied; some of them, such as Word Error Rate (WER), Position-independent word Error Rate (PER), and the BLEU and NIST scores, have become widely used tools for comparing different systems as well as for evaluating improvements within a single system. However, these measures give no details about the nature of translation errors. Some analysis of the generated output is therefore needed in order to identify the main problems and to focus the research efforts. On the other hand, human evaluation is a time-consuming and expensive task. In this paper, we investigate methods for using morpho-syntactic information for automatic evaluation: the standard error measures WER and PER are calculated over distinct word classes and word forms in order to give a better idea of the nature of translation errors and of the possibilities for improvement.
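To make the two error measures named in the abstract concrete, here is a minimal sketch of word-level WER (Levenshtein edit distance over words, normalized by reference length) and a common bag-of-words formulation of PER, which ignores word order. This is an illustrative implementation under those standard definitions, not the exact scoring code used in the paper; the function names `wer` and `per` are my own.

```python
from collections import Counter

def wer(ref: str, hyp: str) -> float:
    """Word Error Rate: word-level Levenshtein distance / reference length."""
    r, h = ref.split(), hyp.split()
    # Dynamic-programming table of edit distances between prefixes.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i          # deletions to reach empty hypothesis
    for j in range(len(h) + 1):
        d[0][j] = j          # insertions to build hypothesis from nothing
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # match / substitution
    return d[len(r)][len(h)] / len(r)

def per(ref: str, hyp: str) -> float:
    """Position-independent Error Rate: compare bags of words, ignoring order."""
    r, h = Counter(ref.split()), Counter(hyp.split())
    matches = sum((r & h).values())   # multiset intersection size
    n_ref, n_hyp = sum(r.values()), sum(h.values())
    return 1 - (matches - max(0, n_hyp - n_ref)) / n_ref
```

Because PER discards word order, a hypothesis that is a permutation of the reference gets PER = 0 while its WER can be high; restricting both measures to a single word class (e.g. only verbs) is the kind of error analysis the paper proposes.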

Cite

CITATION STYLE

APA

Popović, M., Ney, H., De Gispert, A., Mariño, J. B., Gupta, D., Federico, M., … Banchs, R. (2006). Morpho-syntactic information for automatic error analysis of statistical machine translation output. In HLT-NAACL 2006 - Statistical Machine Translation, Proceedings of the Workshop (pp. 1–6). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1654650.1654652
