A Syntax-Augmented and Headline-Aware Neural Text Summarization Method

17 citations · 22 Mendeley readers


Abstract

With the advent of the information age, the explosive growth of information has led to information overload, and automatic text summarization has become an effective way to address it. This paper proposes an automatic text summarization model that extends the traditional sequence-to-sequence (Seq2Seq) neural text summarization model with a syntax-augmented encoder and a headline-aware decoder. The encoder encodes both the syntactic structure and the word information of a sentence in the sentence embedding, and a hierarchical attention mechanism is proposed to attend to syntactic units. The decoder is improved with a headline attention mechanism and a Dual-memory-cell LSTM network to raise the quality of the generated summaries. We designed experiments to compare the proposed method with baseline models on the CNN/DM dataset. The results show that the proposed method outperforms the abstractive baseline models on the ROUGE evaluation metrics and achieves summary generation performance comparable to the extractive baseline method. Qualitative analysis shows that the summaries produced by the proposed method are more readable and less redundant, which agrees well with our intuition.
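To make the headline-aware decoding idea concrete, the sketch below shows one way a decoder step could attend separately to the encoded article and the encoded headline and fuse the two contexts. This is a minimal PyTorch illustration based only on the abstract; the module names, additive scoring functions, dimensions, and the fusion layer are assumptions, not the authors' implementation (which also involves hierarchical syntactic attention and a Dual-memory-cell LSTM not reproduced here).

```python
# Hypothetical sketch of a headline-aware attention step (not the paper's code).
# Assumption: additive attention is applied independently over document and
# headline encoder states, and the two contexts are fused with the decoder state.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HeadlineAwareAttention(nn.Module):
    """Attend to document and headline states, then fuse both contexts."""

    def __init__(self, hidden: int):
        super().__init__()
        self.doc_score = nn.Linear(2 * hidden, 1)   # scores over article tokens
        self.head_score = nn.Linear(2 * hidden, 1)  # scores over headline tokens
        self.combine = nn.Linear(3 * hidden, hidden)

    def _context(self, scorer, states, query):
        # states: (batch, seq, hidden); query: (batch, hidden)
        q = query.unsqueeze(1).expand(-1, states.size(1), -1)
        weights = F.softmax(scorer(torch.cat([states, q], dim=-1)).squeeze(-1), dim=-1)
        return torch.bmm(weights.unsqueeze(1), states).squeeze(1)

    def forward(self, doc_states, headline_states, decoder_state):
        doc_ctx = self._context(self.doc_score, doc_states, decoder_state)
        head_ctx = self._context(self.head_score, headline_states, decoder_state)
        # Fuse the decoder state with both contexts into one attentional vector.
        return torch.tanh(self.combine(torch.cat([decoder_state, doc_ctx, head_ctx], dim=-1)))


if __name__ == "__main__":
    batch, hidden = 2, 64
    attn = HeadlineAwareAttention(hidden)
    doc = torch.randn(batch, 30, hidden)      # encoded article tokens
    headline = torch.randn(batch, 8, hidden)  # encoded headline tokens
    state = torch.randn(batch, hidden)        # current decoder hidden state
    print(attn(doc, headline, state).shape)   # torch.Size([2, 64])
```

In this reading, the headline acts as a compact query-side summary that biases each decoding step toward salient content; how the paper actually weights the headline context against the document context is a detail the abstract does not specify.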

Citation (APA)
Cheng, J., Zhang, F., & Guo, X. (2020). A Syntax-Augmented and Headline-Aware Neural Text Summarization Method. IEEE Access, 8, 218360–218371. https://doi.org/10.1109/ACCESS.2020.3042886
