Cascaded attention based unsupervised information distillation for compressive summarization

Abstract

When people recall and digest what they have read in order to write a summary, the important content is more likely to attract their attention. Inspired by this observation, we propose a cascaded attention based unsupervised model to estimate the salient information in a text for compressive multi-document summarization. The attention weights are learned automatically by an unsupervised data reconstruction framework and capture sentence salience. By adding sparsity constraints on the number of output vectors, we generate condensed information that can be treated as word salience. Fine-grained and coarse-grained sentence compression strategies are incorporated to produce compressive summaries. Experiments on benchmark data sets show that our framework achieves better results than state-of-the-art methods.
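For intuition, the following is a minimal sketch of the reconstruction-with-attention idea the abstract describes, not the authors' implementation: sentence embeddings are attention-pooled to reconstruct a document vector, and the learned attention weights are read off as sentence salience. It assumes PyTorch and hypothetical names (SalienceReconstructor, estimate_sentence_salience), and an entropy penalty stands in for the paper's sparsity constraint on the output vectors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SalienceReconstructor(nn.Module):
    """Hypothetical module: score sentences, attention-pool them,
    and reconstruct a document vector from the pooled representation."""

    def __init__(self, embed_dim: int):
        super().__init__()
        self.scorer = nn.Linear(embed_dim, 1)            # one attention logit per sentence
        self.decoder = nn.Linear(embed_dim, embed_dim)   # pooled vector -> document vector

    def forward(self, sent_embs: torch.Tensor):
        # sent_embs: (num_sentences, embed_dim)
        attn = torch.softmax(self.scorer(sent_embs).squeeze(-1), dim=0)  # (num_sentences,)
        pooled = attn @ sent_embs                                        # (embed_dim,)
        return self.decoder(pooled), attn


def estimate_sentence_salience(sent_embs, epochs=300, lr=1e-2, peakiness=1e-2):
    """Unsupervised training: reconstruct the mean document vector from the
    attention-weighted sentences. The entropy penalty (a stand-in for the
    paper's sparsity constraint) keeps the attention distribution condensed."""
    target = sent_embs.mean(dim=0)
    model = SalienceReconstructor(sent_embs.size(1))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        recon, attn = model(sent_embs)
        entropy = -(attn * attn.clamp_min(1e-9).log()).sum()
        loss = F.mse_loss(recon, target) + peakiness * entropy
        loss.backward()
        opt.step()
    with torch.no_grad():
        _, attn = model(sent_embs)
    return attn  # higher weight = more salient sentence


# Example: salience scores for 12 sentences with 128-dimensional embeddings.
if __name__ == "__main__":
    print(estimate_sentence_salience(torch.randn(12, 128)))
```

The paper's actual model cascades attention across layers and also derives word-level salience for fine-grained and coarse-grained compression; the sketch above only illustrates the single-level, sentence-salience-by-reconstruction step.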

Cite (APA)

Li, P., Lam, W., Bing, L., Guo, W., & Li, H. (2017). Cascaded attention based unsupervised information distillation for compressive summarization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017) (pp. 2081–2090). Association for Computational Linguistics. https://doi.org/10.18653/v1/d17-1221
