Sequence generation: From both sides to the middle

Abstract

The encoder-decoder framework has achieved promising progress for many sequence generation tasks, such as neural machine translation and text summarization. Such a framework usually generates a sequence token by token from left to right, hence (1) this autoregressive decoding procedure is time-consuming when the output sentence becomes longer, and (2) it lacks the guidance of future context, which is crucial to avoid under-translation. To alleviate these issues, we propose a synchronous bidirectional sequence generation (SBSG) model which predicts its outputs from both sides to the middle simultaneously. In the SBSG model, we enable the left-to-right (L2R) and right-to-left (R2L) generation to help and interact with each other by leveraging an interactive bidirectional attention network. Experiments on neural machine translation (En-De, Ch-En, and En-Ro) and text summarization tasks show that the proposed model significantly speeds up decoding while improving the generation quality compared to the autoregressive Transformer.
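The core decoding idea in the abstract can be illustrated with a minimal sketch (not the authors' code): the L2R and R2L decoders each emit one token per step, meeting in the middle, so a length-n output needs roughly n/2 decoding steps instead of n. The `predict_l2r`/`predict_r2l` functions here are hypothetical stand-ins for the model's two directional decoders, each of which may attend to the other direction's partial output (the paper's interactive bidirectional attention).

```python
def decode_both_sides(predict_l2r, predict_r2l, max_len):
    """Generate a sequence from both ends toward the middle.

    predict_l2r(left, right) -> next token extending the left prefix
    predict_r2l(left, right) -> next token extending the right suffix
    Both predictors see the other direction's partial output, standing
    in for the interactive bidirectional attention network.
    """
    left, right = [], []          # `right` is stored reversed (end first)
    steps = 0
    while len(left) + len(right) < max_len:
        steps += 1                # one synchronous step emits up to 2 tokens
        left.append(predict_l2r(left, right))
        if len(left) + len(right) >= max_len:
            break
        right.append(predict_r2l(left, right))
    return left + right[::-1], steps

# Toy "model": each predictor reads off a fixed reference sequence,
# mimicking a perfectly trained decoder on the target "a b c d e".
target = ["a", "b", "c", "d", "e"]
l2r = lambda left, right: target[len(left)]
r2l = lambda left, right: target[len(target) - 1 - len(right)]

out, steps = decode_both_sides(l2r, r2l, len(target))
# A length-5 output is produced in 3 synchronous steps rather than 5.
```

This only illustrates the step-count saving; the actual SBSG model additionally trains the two directions jointly so each benefits from the other's (future) context.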

Citation (APA)

Zhou, L., Zhang, J., Zong, C., & Yu, H. (2019). Sequence generation: From both sides to the middle. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2019-August, pp. 5471–5477). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2019/760
