Abstractive document summarization via bidirectional decoder

Abstract

The sequence-to-sequence architecture with an attention mechanism is widely used in abstractive text summarization and has achieved a series of remarkable results. However, this method may suffer from error accumulation: at test time, the decoder's input at each step is the word it generated at the previous step, so decoder-side errors are continuously amplified. This paper proposes a summarization model with a bidirectional decoder (BiSum), in which a backward decoder provides a reference for the forward decoder. We apply attention over both the encoder and the backward decoder so that the forward decoder can make use of the summary generated by the backward decoder. A pointer mechanism is also added to both the backward and the forward decoder to address the out-of-vocabulary problem. We remove the word-segmentation step from regular Chinese preprocessing, which greatly improves summary quality. Experimental results show that our model produces higher-quality summaries on the Chinese TTNews dataset and the English CNN/Daily Mail dataset.
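The abstract mentions two mechanisms worth unpacking: attention computed over both the encoder and the backward decoder, and a pointer mechanism in both decoders for out-of-vocabulary words. The NumPy sketch below illustrates the general ideas only; it is not the authors' implementation, and every name, shape, and simplification in it (dot-product attention, a scalar `p_gen`) is an assumption for illustration.

```python
import numpy as np

def attention(query, keys):
    """Dot-product attention: returns (context, weights) over the key vectors."""
    scores = keys @ query                      # (T,)
    weights = np.exp(scores - scores.max())    # numerically stable softmax
    weights /= weights.sum()
    context = weights @ keys                   # (d,)
    return context, weights

def pointer_final_dist(vocab_dist, attn_weights, src_ids, p_gen, extended_vocab_size):
    """Pointer mechanism: mix generating from the fixed vocabulary with
    copying source tokens, weighted by p_gen.

    vocab_dist : (V,) softmax over the fixed vocabulary
    attn_weights : (T,) attention weights over the T source tokens
    src_ids : (T,) source-token ids in the *extended* vocabulary,
              so OOV source words get temporary ids >= V
    """
    pad = np.zeros(extended_vocab_size - len(vocab_dist))
    final = p_gen * np.concatenate([vocab_dist, pad])
    # Scatter-add the copy probabilities onto the source-token positions.
    np.add.at(final, src_ids, (1.0 - p_gen) * attn_weights)
    return final

# Toy decoding step (all values are stand-ins, not learned parameters).
rng = np.random.default_rng(0)
d, T_src, T_back, V = 8, 5, 4, 10
enc_states = rng.normal(size=(T_src, d))     # encoder hidden states
back_states = rng.normal(size=(T_back, d))   # backward-decoder hidden states
query = rng.normal(size=d)                   # forward-decoder state at step t

ctx_enc, attn_enc = attention(query, enc_states)
ctx_back, _ = attention(query, back_states)  # forward decoder also "reads" the backward draft
vocab_dist = np.full(V, 1.0 / V)             # stand-in for the decoder's softmax output
src_ids = np.array([2, 7, 10, 3, 11])        # two source OOVs get extended ids 10 and 11
dist = pointer_final_dist(vocab_dist, attn_enc, src_ids, p_gen=0.7,
                          extended_vocab_size=12)
assert abs(dist.sum() - 1.0) < 1e-9          # still a valid probability distribution
```

In the full model, `ctx_enc` and `ctx_back` would feed into the forward decoder's next-state and output computations, and `p_gen` would be predicted from the decoder state rather than fixed; those details are omitted here.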

Cite

APA: Wan, X., Li, C., Wang, R., Xiao, D., & Shi, C. (2018). Abstractive document summarization via bidirectional decoder. In Lecture Notes in Computer Science (Vol. 11323 LNAI, pp. 364–377). Springer. https://doi.org/10.1007/978-3-030-05090-0_31
