Human post-editing in hybrid machine translation systems: Automatic and manual analysis and evaluation


Abstract

This study assesses, automatically and manually, the performance of two hybrid machine translation (HMT) systems using a corpus of questions in Spanish and English. The results show that human evaluation metrics are more reliable for assessing HMT performance. Moreover, there is evidence that MT can streamline the translation of specific text types, such as questions; however, it does not yet rival the quality of human translation, and post-editing remains key to this process.

Citation (APA)

Gutiérrez-Artacho, J., Olvera-Lobo, M. D., & Rivera-Trigueros, I. (2018). Human post-editing in hybrid machine translation systems: Automatic and manual analysis and evaluation. In Advances in Intelligent Systems and Computing (Vol. 745, pp. 254–263). Springer Verlag. https://doi.org/10.1007/978-3-319-77703-0_26
