Automatic metrics for machine translation evaluation and minority languages

Abstract

Translation quality and its evaluation play a crucial role in the field of machine translation (MT). This paper focuses on the quality assessment of automatic metrics for MT evaluation. In our study we assess the reliability and validity of the following automatic metrics: Position-independent Error Rate (PER), Word Error Rate (WER), and Cover Disjoint Error Rate (CDER). These metrics quantify the error rate of MT output, and thereby of the MT system itself; in our case, an online statistical MT system. The results of the reliability analysis showed that these automatic metrics for MT evaluation are reliable and valid, with validity and reliability verified for one translation direction: from a minority language (Slovak) into English.
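For illustration only (not from the paper): a minimal sketch of how WER and PER can be computed for a single sentence pair. WER divides the word-level Levenshtein distance by the reference length, while PER ignores word order and counts only bag-of-words mismatches; CDER (not sketched here) additionally allows blocks of words to be covered in arbitrary order. The example sentences and function names below are hypothetical.

    from collections import Counter

    def wer(reference: list[str], hypothesis: list[str]) -> float:
        """Word Error Rate: word-level Levenshtein distance / reference length."""
        m, n = len(reference), len(hypothesis)
        # d[i][j] = edit distance between reference[:i] and hypothesis[:j]
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            d[i][0] = i
        for j in range(n + 1):
            d[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,          # deletion
                              d[i][j - 1] + 1,          # insertion
                              d[i - 1][j - 1] + cost)   # substitution or match
        return d[m][n] / m

    def per(reference: list[str], hypothesis: list[str]) -> float:
        """Position-independent Error Rate: order-insensitive word mismatches."""
        matches = sum((Counter(reference) & Counter(hypothesis)).values())
        return (max(len(reference), len(hypothesis)) - matches) / len(reference)

    ref = "the cat sat on the mat".split()
    hyp = "on the mat the cat sat".split()
    print(f"WER = {wer(ref, hyp):.2f}")  # 1.00: WER penalizes the reordering
    print(f"PER = {per(ref, hyp):.2f}")  # 0.00: identical bag of words

Lower scores indicate fewer errors; in practice such error rates are computed over a whole test set rather than a single sentence pair.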

Citation (APA)

Munková, D., & Munk, M. (2016). Automatic metrics for machine translation evaluation and minority languages. In Lecture Notes in Electrical Engineering (Vol. 381, pp. 631–636). Springer Verlag. https://doi.org/10.1007/978-3-319-30298-0_69
