Abstract
The success of neural summarization models stems from the meticulous encoding of source articles. To overcome the impediments of limited and sometimes noisy training data, one promising direction is to make better use of the available training data by applying filters during summarization. In this paper, we propose a novel Bi-directional Selective Encoding with Template (BiSET) model, which leverages templates discovered from the training data to softly select key information from each source article and guide its summarization process. Extensive experiments were conducted on a standard summarization dataset, and the results show that the template-equipped BiSET model improves summarization performance significantly, achieving a new state of the art.
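The core idea of the abstract, using a retrieved template to softly filter the encoded source article (and the article to filter the template in turn), can be illustrated with a small gating layer. The following is a minimal sketch in PyTorch, not the authors' implementation: the mean-pooled summaries, sigmoid gates, layer names, and tensor shapes are assumptions made purely for illustration.

```python
# Illustrative sketch of a bi-directional selective gate (assumed design,
# not the released BiSET code): a template representation softly selects
# salient article features, and the article does the same for the template.
import torch
import torch.nn as nn


class BiDirectionalSelectiveGate(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        # Gate parameters for each direction (names are hypothetical).
        self.article_gate = nn.Linear(2 * hidden_size, hidden_size)
        self.template_gate = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, article: torch.Tensor, template: torch.Tensor):
        """article: (batch, src_len, hidden); template: (batch, tpl_len, hidden)."""
        # Summarize each sequence by mean pooling (a simplifying assumption).
        article_summary = article.mean(dim=1, keepdim=True)    # (batch, 1, hidden)
        template_summary = template.mean(dim=1, keepdim=True)  # (batch, 1, hidden)

        # Template-aware gate applied to every article position.
        g_article = torch.sigmoid(self.article_gate(
            torch.cat([article, template_summary.expand_as(article)], dim=-1)))
        # Article-aware gate applied to every template position.
        g_template = torch.sigmoid(self.template_gate(
            torch.cat([template, article_summary.expand_as(template)], dim=-1)))

        # Softly select key information in both directions.
        return article * g_article, template * g_template


if __name__ == "__main__":
    gate = BiDirectionalSelectiveGate(hidden_size=8)
    art = torch.randn(2, 5, 8)   # toy encoded article
    tpl = torch.randn(2, 3, 8)   # toy encoded template
    filtered_art, filtered_tpl = gate(art, tpl)
    print(filtered_art.shape, filtered_tpl.shape)  # (2, 5, 8) and (2, 3, 8)
```

The gated outputs would then feed a downstream decoder; the choice of pooling and gating functions here stands in for whatever interaction the full model actually uses.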
Citation
Wang, K., Quan, X., & Wang, R. (2019). BiSET: Bi-directional Selective Encoding with Template for Abstractive Summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019) (pp. 2153–2162). Association for Computational Linguistics. https://doi.org/10.18653/v1/p19-1207