A Naïve automatic MT evaluation method without reference translations

Abstract

Traditional automatic machine translation (MT) evaluation methods are based on calculating the similarity between MT output and human reference translations. However, many users need to assess translations for which no references exist, so developing an evaluation method that works without references is a key research issue. In this paper, we propose a novel automatic MT evaluation method that requires no human reference translations. First, we calculate the average n-gram probability of the source sentence under a source language model; similarly, we calculate the average n-gram probability of the machine-translated sentence under a target language model; finally, we use the relative error between the two average n-gram probabilities to score the machine-translated sentence. The experimental results show that our method achieves high correlations with several automatic MT evaluation metrics. The main contribution of this paper is that users can obtain reliable MT evaluation scores in the absence of reference translations, which greatly improves the utility of MT evaluation metrics. © 2011 Springer-Verlag Berlin Heidelberg.
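The sketch below illustrates the scoring idea described in the abstract: average n-gram probabilities of the source sentence and the MT output are computed under language models of their respective languages, and the relative error between the two averages serves as the score. The simple add-one-smoothed n-gram models, the whitespace tokenization, and the helper names (train_ngram_lm, avg_ngram_prob, reference_free_score) are illustrative assumptions, not the authors' implementation.

```python
import math
from collections import Counter


def train_ngram_lm(corpus_sentences, n=3):
    """Train a simple add-one smoothed n-gram LM (hypothetical helper)."""
    ngram_counts = Counter()
    context_counts = Counter()
    vocab = set()
    for sent in corpus_sentences:
        tokens = ["<s>"] * (n - 1) + sent.split() + ["</s>"]
        vocab.update(tokens)
        for i in range(len(tokens) - n + 1):
            ngram = tuple(tokens[i:i + n])
            ngram_counts[ngram] += 1
            context_counts[ngram[:-1]] += 1
    return ngram_counts, context_counts, len(vocab), n


def avg_ngram_prob(sentence, lm):
    """Average probability of the sentence's n-grams under the model."""
    ngram_counts, context_counts, vocab_size, n = lm
    tokens = ["<s>"] * (n - 1) + sentence.split() + ["</s>"]
    probs = []
    for i in range(len(tokens) - n + 1):
        ngram = tuple(tokens[i:i + n])
        # Add-one (Laplace) smoothing so unseen n-grams get non-zero probability.
        p = (ngram_counts[ngram] + 1) / (context_counts[ngram[:-1]] + vocab_size)
        probs.append(p)
    return sum(probs) / len(probs)


def reference_free_score(source_sentence, mt_sentence, source_lm, target_lm):
    """Relative error between source- and target-side average n-gram probabilities.

    A smaller relative error means the translation is roughly as probable under
    the target LM as the source is under the source LM, which the paper uses as
    a reference-free proxy for translation quality.
    """
    p_src = avg_ngram_prob(source_sentence, source_lm)
    p_tgt = avg_ngram_prob(mt_sentence, target_lm)
    return abs(p_tgt - p_src) / p_src


# Toy usage (corpora and sentences are placeholders):
src_lm = train_ngram_lm(["das ist ein haus", "das ist gut"], n=2)
tgt_lm = train_ngram_lm(["this is a house", "this is good"], n=2)
print(reference_free_score("das ist ein haus", "this is a house", src_lm, tgt_lm))
```

In practice one would train much larger language models (e.g. with standard LM toolkits) and may work with log probabilities for numerical stability; the relative-error formula above simply mirrors the description in the abstract.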

Cite (APA)

Jiang, J., Xu, J., & Lin, Y. (2011). A Naïve automatic MT evaluation method without reference translations. In Advances in Intelligent and Soft Computing (Vol. 123, pp. 499–508). https://doi.org/10.1007/978-3-642-25661-5_62
