Compact and robust models for Japanese-English character-level machine translation


Abstract

Character-level translation has been shown to achieve good translation quality without explicit segmentation, but training a character-level model demands substantial hardware resources. In this paper, we introduce two character-level translation models for Japanese-English translation: a mid-gated model and a multi-attention model. We show that the mid-gated model achieves the better BLEU scores of the two, and that a relatively narrow beam of width 4 or 5 is sufficient for it. For unknown words, the mid-gated model can often translate those containing Katakana by coining a close approximation. The model also produces tolerable results for heavily noised sentences, even though it was trained on a noise-free dataset.
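The character-level setup the abstract describes can be illustrated with a minimal sketch. This is not the authors' code; the function names and the reserved `<unk>` id are assumptions for illustration only. It shows the key point: the input is split into individual characters, so no word segmenter is needed for Japanese.

```python
# Illustrative sketch (not the authors' implementation): character-level
# preprocessing for translation. Each character is a vocabulary item, so
# Japanese text needs no explicit word segmentation.

def char_tokenize(sentence):
    """Split a sentence into a list of single characters (spaces kept)."""
    return list(sentence)

def build_vocab(sentences):
    """Map each distinct character to an integer id; 0 is reserved for <unk>."""
    chars = sorted({ch for s in sentences for ch in s})
    return {"<unk>": 0, **{ch: i + 1 for i, ch in enumerate(chars)}}

corpus = ["私は学生です", "I am a student"]
vocab = build_vocab(corpus)
# Unknown characters fall back to the <unk> id.
ids = [vocab.get(ch, 0) for ch in char_tokenize(corpus[0])]
```

Because the vocabulary is just the character set, it stays small compared with word- or subword-level vocabularies, which is one reason character-level models can be kept compact.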

Citation (APA)

Dai, J., & Yamaguchi, K. (2021). Compact and robust models for Japanese-English character-level machine translation. In WAT@EMNLP-IJCNLP 2019 - 6th Workshop on Asian Translation, Proceedings (pp. 36–44). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d19-5202
