We present a novel method for evaluating the output of Machine Translation (MT), based on comparing the dependency structures of the translation and reference rather than their surface string forms. Our method uses a treebank-based, wide-coverage, probabilistic Lexical-Functional Grammar (LFG) parser to produce a set of structural dependencies for each translation-reference sentence pair, and then calculates the precision and recall for these dependencies. In contrast to most popular string-based evaluation metrics, our dependency-based evaluation does not unfairly penalize perfectly valid syntactic variations in the translation. In addition to allowing for legitimate syntactic differences, we use paraphrases in the evaluation process to account for lexical variation. In a comparison with other metrics on 16,800 sentences of Chinese-English newswire text, our method achieves high correlation with human scores. An experiment with two translations of 4,000 sentences from Spanish-English Europarl shows that, in contrast to most other metrics, our method does not display a strong bias towards statistical models of translation.
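
The scoring step can be illustrated with a minimal sketch in Python. It assumes, purely for illustration, that the parser's output has already been flattened into (relation, head, dependent) triples; the LFG parsing and paraphrase-matching stages are out of scope here, and dependency_f_score is a hypothetical helper, not code from the paper.

    from collections import Counter

    def dependency_f_score(candidate_deps, reference_deps):
        """Precision, recall, and F-score over dependency triples.

        Each dependency is a (relation, head, dependent) triple,
        e.g. ("subj", "resign", "john"). Counters are used so that
        repeated triples are matched at most as often as they occur.
        """
        cand = Counter(candidate_deps)
        ref = Counter(reference_deps)
        matched = sum((cand & ref).values())  # multiset intersection
        precision = matched / sum(cand.values()) if cand else 0.0
        recall = matched / sum(ref.values()) if ref else 0.0
        if precision + recall == 0:
            return precision, recall, 0.0
        f_score = 2 * precision * recall / (precision + recall)
        return precision, recall, f_score

    # "Yesterday, John resigned" vs. "John resigned yesterday":
    # different word orders, identical dependencies, perfect score.
    translation = [("subj", "resign", "john"), ("adjunct", "resign", "yesterday")]
    reference = [("subj", "resign", "john"), ("adjunct", "resign", "yesterday")]
    print(dependency_f_score(translation, reference))  # (1.0, 1.0, 1.0)

Because both word orders yield the same triples, the sketch scores them identically, whereas an n-gram metric such as BLEU would penalize the reordering; this is the intuition behind comparing dependency structures rather than surface strings.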
CITATION STYLE
Owczarzak, K., van Genabith, J., & Way, A. (2007). Dependency-based automatic evaluation for machine translation. In Proceedings of NAACL-HLT 2007/AMTA Workshop on Syntax and Structure in Statistical Translation, SSST 2007 (pp. 80–87). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1626281.1626292