Train Once, and Decode As You Like


Abstract

In this paper, we propose a unified approach that supports different generation paradigms for machine translation, including autoregressive, semi-autoregressive, and refinement-based non-autoregressive models. Our approach works by repeatedly selecting positions and generating tokens at those positions. Once trained, the model achieves better or competitive translation performance compared with strong task-specific baselines in all settings. This generality stems mainly from the new training objective that we propose. We validate our approach on the WMT'14 English-German and IWSLT'14 German-English translation tasks, and the experimental results are encouraging.
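The abstract describes the decoding scheme only at a high level. The sketch below illustrates how a single "select positions, then generate" loop can subsume all three generation manners. It is a hedged illustration, assuming a fixed target length and a hypothetical `predict_tokens` model interface; it is not the authors' implementation.

```python
# A minimal sketch (not the authors' code) of the "select positions, then
# generate" decoding loop described in the abstract. The model interface
# `predict_tokens` and the selection policies below are hypothetical.

MASK = "<mask>"

def decode(predict_tokens, length, select_positions, max_steps):
    """Repeatedly select positions and fill them with predicted tokens.

    predict_tokens(seq) -> list of predicted tokens, one per position.
    select_positions(seq, step) -> indices to (re)generate at this step.
    Assumes a fixed target length for simplicity.
    """
    seq = [MASK] * length
    for step in range(max_steps):
        positions = select_positions(seq, step)
        if not positions:
            break
        preds = predict_tokens(seq)  # condition on the current partial output
        for i in positions:
            seq[i] = preds[i]
    return seq

# Different selection policies recover the different generation manners:

def autoregressive(seq, step):
    # One position per step, strictly left to right.
    return [step] if step < len(seq) else []

def semi_autoregressive(seq, step, k=4):
    # A block of k positions per step, left to right.
    return list(range(step * k, min((step + 1) * k, len(seq))))

def refine_all(seq, step):
    # All positions at every step: generate in parallel, then refine.
    return list(range(len(seq)))
```

Calling `decode` with `autoregressive` and `max_steps` equal to the target length recovers left-to-right decoding; `refine_all` with a small `max_steps` corresponds to iterative non-autoregressive refinement.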

Citation (APA)

Tian, C., Wang, Y., Cheng, H., Lian, Y., & Zhang, Z. (2020). Train Once, and Decode As You Like. In COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference (pp. 280–293). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-main.25
