HUME: Human UCCA-based evaluation of machine translation

Citations: 25
Mendeley readers: 97

Abstract

Human evaluation of machine translation normally uses sentence-level measures such as relative ranking or adequacy scales. However, these provide no insight into possible errors, and do not scale well with sentence length. We argue for a semantics-based evaluation, which captures what meaning components are retained in the MT output, thus providing a more fine-grained analysis of translation quality, and enabling the construction and tuning of semantics-based MT. We present a novel human semantic evaluation measure, Human UCCA-based MT Evaluation (HUME), building on the UCCA semantic representation scheme. HUME covers a wider range of semantic phenomena than previous methods and does not rely on semantic annotation of the potentially garbled MT output. We experiment with four language pairs, demonstrating HUME's broad applicability, and report good inter-annotator agreement rates and correlation with human adequacy scores.
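
To illustrate the kind of aggregation such a measure implies, here is a minimal sketch that turns per-unit human judgments into a sentence-level score as a simple ratio. The `UnitJudgment` structure and the simple-ratio aggregation are assumptions for illustration only, not the paper's actual annotation protocol or scoring formula:

```python
from dataclasses import dataclass

@dataclass
class UnitJudgment:
    unit_id: str      # identifier of a semantic unit in the source annotation
    preserved: bool   # annotator's judgment: is this unit's meaning kept in the MT output?

def hume_like_score(judgments: list[UnitJudgment]) -> float:
    """Fraction of source-side semantic units judged as preserved (illustrative only)."""
    if not judgments:
        raise ValueError("no judgments to aggregate")
    kept = sum(j.preserved for j in judgments)
    return kept / len(judgments)

# Example: 3 of 4 units judged preserved -> 0.75
example = [
    UnitJudgment("scene-1", True),
    UnitJudgment("participant-1", True),
    UnitJudgment("participant-2", False),
    UnitJudgment("relation-1", True),
]
print(hume_like_score(example))  # 0.75
```

Because the judgments attach to individual semantic units of the source rather than to the whole sentence, such a score localizes which meaning components were lost, which is the fine-grained analysis the abstract argues for.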

Citation (APA)

Birch, A., Abend, O., Bojar, O., & Haddow, B. (2016). HUME: Human UCCA-based evaluation of machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016) (pp. 1264–1274). Association for Computational Linguistics. https://doi.org/10.18653/v1/d16-1134
