VAE-PGN based abstractive model in multi-stage architecture for text summarization

Abstract

This paper describes our submission to the TL;DR challenge. Neural abstractive summarization models have been successful in generating fluent and consistent summaries with advancements like the copy (Pointer-generator) and coverage mechanisms. However, these models suffer from their extractive nature as they learn to copy words from the source text. In this paper, we propose a novel abstractive model based on Variational Autoencoder (VAE) to address this issue. We also propose a Unified Summarization Framework for the generation of summaries. Our model eliminates non-critical information at a sentence-level with an extractive summarization module and generates the summary word by word using an abstractive summarization module. To implement our framework, we combine submodules with state-of-the-art techniques including Pointer-Generator Network (PGN) and BERT while also using our new VAE-PGN abstractive model. We evaluate our model on the benchmark Reddit corpus as part of the TL;DR challenge and show that our model outperforms the baseline in ROUGE score while generating diverse summaries.
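The abstract refers to the copy (pointer-generator) mechanism, in which the final output distribution mixes a generation distribution over the vocabulary with a copy distribution induced by the attention over source tokens. The following is a minimal illustrative sketch of that mixing step, not the authors' implementation; the function name and array shapes are assumptions for illustration.

```python
import numpy as np

def pointer_generator_dist(p_vocab, attention, src_ids, p_gen):
    """Mix generation and copy distributions as in a pointer-generator network.

    p_vocab:   (vocab_size,) softmax distribution over the fixed vocabulary
    attention: (src_len,) attention weights over source tokens (sums to 1)
    src_ids:   (src_len,) vocabulary ids of the source tokens
    p_gen:     scalar in [0, 1], probability of generating vs. copying
    """
    final = p_gen * p_vocab
    # Scatter-add copy probabilities onto the vocab ids of the source tokens,
    # so repeated source tokens accumulate their attention mass.
    np.add.at(final, src_ids, (1.0 - p_gen) * attention)
    return final
```

Because both input distributions sum to 1 and `p_gen` interpolates between them, the mixed distribution also sums to 1; tokens that appear in the source receive extra mass proportional to their attention weight.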

Citation (APA)

Choi, H., Ravuru, L., Dryjanski, T., Ryu, S., Lee, D., Lee, H., & Hwang, I. (2019). VAE-PGN based abstractive model in multi-stage architecture for text summarization. In INLG 2019 - 12th International Conference on Natural Language Generation, Proceedings of the Conference (pp. 510–515). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/W19-8664
