A fuzzier approach to machine translation evaluation: A pilot study on post-editing productivity and automated metrics in commercial settings


Abstract

Machine Translation (MT) quality is typically assessed with automatic evaluation metrics such as BLEU and TER. Although fuzzy match values are widely used in industry to estimate the usefulness of Translation Memory (TM) matches based on text similarity, they are rarely applied for the same purpose in MT evaluation. We designed an experiment to test whether a fuzzy match score computed on MT output stands up against traditional MT evaluation methods. The results obtained suggest that this metric performs at least as well as traditional MT evaluation metrics.
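The abstract does not specify the exact scoring formula; a common TM fuzzy match formulation is a normalized word-level edit distance between two segments. The sketch below (a hypothetical illustration, not necessarily the authors' implementation) computes such a score between an MT output and a reference:

```python
# Hypothetical sketch of a TM-style fuzzy match score applied to MT output.
# Assumes the common word-based Levenshtein formulation; the paper's exact
# scoring (e.g., character-based or tool-specific weighting) may differ.

def levenshtein(a, b):
    """Word-level edit distance between token sequences a and b."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n]

def fuzzy_match(candidate, reference):
    """Fuzzy match percentage: 100 * (1 - edit_distance / longer_length)."""
    c, r = candidate.split(), reference.split()
    if not c and not r:
        return 100.0
    return 100.0 * (1.0 - levenshtein(c, r) / max(len(c), len(r)))

# One substituted word out of six yields a match of about 83%.
print(round(fuzzy_match("the cat sat on the mat", "the cat sat on a mat"), 1))
```

Under this formulation, a 100% match means the MT output is identical to the reference, mirroring how TM tools grade the reusability of a stored translation.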

Citation (APA)

Escartín, C. P., & Arcedillo, M. (2015). A fuzzier approach to machine translation evaluation: A pilot study on post-editing productivity and automated metrics in commercial settings. In ACL-IJCNLP 2015 - 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, Proceedings of the 4th Workshop on Hybrid Approaches to Translation, HyTra 2015 (pp. 40–45). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w15-4107
