Low-Resource Machine Translation Based on Asynchronous Dynamic Programming

Abstract

Reinforcement learning has proven effective for low-resource machine translation, and the sampling method used in reinforcement learning strongly affects model performance. Because the reward for a generated translation depends on how the sampling strategy scales and iterates, it is difficult for the model to achieve a bias-variance trade-off. To address the model's limited ability to analyze sequence structure in low-resource tasks, this paper proposes a parameter optimization method for neural machine translation based on an asynchronous dynamic programming training strategy. By prioritizing experiences under the current policy, each selectively sampled experience not only improves the value estimate of its state but also avoids the high computational cost inherent in traditional value-estimation methods (such as dynamic programming). We evaluate the method on the Mongolian-Chinese and Uyghur-Chinese tasks of CCMT2019. The results show that our method improves translation quality over general reinforcement learning methods for low-resource neural machine translation, demonstrating its effectiveness.
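The core idea in the abstract, updating value estimates only for prioritized experiences instead of sweeping every state as classical dynamic programming does, corresponds to asynchronous dynamic programming with prioritized updates. The Python sketch below illustrates that general idea on a hypothetical toy MDP (a short chain task); it is not the paper's training procedure, and all names (prioritized_async_dp, the chain environment, the THETA threshold) are illustrative assumptions.

```python
# Minimal sketch of asynchronous dynamic programming with prioritized updates.
# NOT the authors' method; a toy illustration only. The chain MDP, THETA, and
# function names are hypothetical.
import heapq

N = 10          # states 0..N-1; state N-1 is terminal with reward 1
GAMMA = 0.9
THETA = 1e-6    # priority threshold below which a state is not re-queued


def transitions(s):
    """Return [(next_state, reward, done)] for the single 'right' action."""
    if s == N - 1:
        return []
    nxt = s + 1
    return [(nxt, 1.0 if nxt == N - 1 else 0.0, nxt == N - 1)]


def backup(V, s):
    """One Bellman backup for state s."""
    outcomes = transitions(s)
    if not outcomes:
        return 0.0
    nxt, r, done = outcomes[0]
    return r + (0.0 if done else GAMMA * V[nxt])


def prioritized_async_dp():
    V = [0.0] * N
    # predecessors[s] = states whose value depends on V[s]
    predecessors = {s: [] for s in range(N)}
    for s in range(N):
        for nxt, _, _ in transitions(s):
            predecessors[nxt].append(s)

    # Seed the queue with each state's initial Bellman error (max-heap via negation).
    pq = []
    for s in range(N):
        err = abs(backup(V, s) - V[s])
        if err > THETA:
            heapq.heappush(pq, (-err, s))

    updates = 0
    while pq:
        _, s = heapq.heappop(pq)
        new_v = backup(V, s)
        if abs(new_v - V[s]) <= THETA:
            continue
        V[s] = new_v
        updates += 1
        # Only predecessors of s can have a changed Bellman error, so only they
        # are re-queued -- this selective, asynchronous sweep avoids updating
        # every state on every iteration.
        for p in predecessors[s]:
            err = abs(backup(V, p) - V[p])
            if err > THETA:
                heapq.heappush(pq, (-err, p))
    return V, updates


if __name__ == "__main__":
    values, n_updates = prioritized_async_dp()
    print("state values:", [round(v, 4) for v in values])
    print("backups performed:", n_updates)
```

On this toy chain the prioritized sweep converges after roughly one backup per state, whereas a synchronous sweep would re-evaluate all states each iteration; this is the computational saving the abstract alludes to.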

Citation (APA)

Jia, X., Hou, H., Wu, N., Li, H., & Chang, X. (2021). Low-Resource Machine Translation Based on Asynchronous Dynamic Programming. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12869 LNAI, pp. 16–28). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-84186-7_2
