Most syntax-based machine translation evaluation metrics require human-designed sub-structures. In this paper, we propose a novel evaluation metric based on a dependency parsing model, which removes the need for this human involvement. Experimental results show that the new single metric achieves better correlation with human judgments than METEOR at the system level and is comparable with it at the sentence level. To incorporate more information, we combine the new metric with several other metrics. The combined metric obtains state-of-the-art performance on both system-level and sentence-level evaluation on WMT 2014.
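As a rough illustration of how a dependency-based metric can score a hypothesis against a reference, the sketch below computes an F-score over overlapping (head, relation, dependent) triples. This is a minimal, assumed formulation for illustration only, not the paper's actual model; the triples are hand-written here, whereas in practice they would come from a dependency parser.

```python
# Hedged sketch: score a hypothesis translation against a reference by the
# overlap of their dependency triples (head, relation, dependent).
# NOTE: this is an illustrative assumption, not the metric from the paper;
# triples are hand-specified instead of parser output.

def dep_f1(hyp_triples, ref_triples):
    """F1 over the sets of dependency triples of hypothesis vs. reference."""
    hyp, ref = set(hyp_triples), set(ref_triples)
    overlap = len(hyp & ref)
    if overlap == 0:
        return 0.0
    precision = overlap / len(hyp)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

# Toy example: reference "John saw the dog" vs. hypothesis "John saw the cat".
ref = [("saw", "nsubj", "John"), ("saw", "obj", "dog"), ("dog", "det", "the")]
hyp = [("saw", "nsubj", "John"), ("saw", "obj", "cat"), ("cat", "det", "the")]
print(round(dep_f1(hyp, ref), 3))  # only the nsubj triple matches
```

Matching on labeled triples rewards hypotheses that preserve the reference's grammatical relations rather than just its surface n-grams, which is the general motivation behind dependency-based metrics.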
CITATION STYLE
Yu, H., Ma, Q., Wu, X., & Liu, Q. (2015). CASICT-DCU participation in WMT2015 metrics task. In Proceedings of the Tenth Workshop on Statistical Machine Translation (WMT 2015), EMNLP 2015 (pp. 417–421). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w15-3053