This study assesses, automatically and manually, the performance of two hybrid machine translation (HMT) systems on a corpus of questions in Spanish and English. The results show that human evaluation metrics are more reliable for assessing HMT performance. Further, there is evidence that MT can streamline the translation process for specific text types, such as questions; however, it does not yet rival the quality of human translation, making post-editing a key step in the process.
CITATION STYLE
Gutiérrez-Artacho, J., Olvera-Lobo, M. D., & Rivera-Trigueros, I. (2018). Human post-editing in hybrid machine translation systems: Automatic and manual analysis and evaluation. In Advances in Intelligent Systems and Computing (Vol. 745, pp. 254–263). Springer Verlag. https://doi.org/10.1007/978-3-319-77703-0_26