Query-focused summarization (QFS) models aim to generate summaries from source documents that answer a given query. Most previous work on QFS considers only query relevance when producing the summary; however, the effect of answer relevance on the summary generation process is also important to study. In this paper, we propose QFS-BART, a model that incorporates the explicit answer relevance of the source documents given the query, computed via a question answering model, to generate coherent and answer-related summaries. Furthermore, our model can take advantage of large pre-trained models, which improves summarization performance significantly. Empirical results on the Debatepedia dataset show that the proposed model achieves new state-of-the-art performance.
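The core idea of feeding explicit answer relevance into the summarizer can be illustrated with a minimal sketch. The helpers below are hypothetical and are not the paper's implementation: `qa_relevance` is a toy word-overlap stand-in for a real QA model's answer-confidence score, and `build_summarizer_input` simply ranks document sentences by that score and prepends the query, approximating how answer-relevance signals could condition a BART-style encoder input.

```python
# Hedged sketch: injecting answer relevance into a query-focused
# summarizer's input. Both functions are illustrative assumptions,
# not the QFS-BART implementation.

def qa_relevance(query: str, sentence: str) -> float:
    """Toy stand-in for a QA model's answer-confidence score:
    the fraction of query words that also appear in the sentence."""
    query_words = set(query.lower().split())
    sentence_words = set(sentence.lower().split())
    return len(query_words & sentence_words) / max(len(query_words), 1)

def build_summarizer_input(query: str, sentences: list[str]) -> str:
    """Rank sentences by answer relevance, then prepend the query,
    so the most answer-relevant content appears first in the input."""
    ranked = sorted(sentences,
                    key=lambda s: qa_relevance(query, s),
                    reverse=True)
    return query + " </s> " + " ".join(ranked)
```

For example, given the query "is coal cheap" and a document whose sentences discuss both solar and coal power, the sentence directly answering the query is scored highest and placed first in the summarizer input. A real system would replace `qa_relevance` with scores from a trained extractive QA model.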
Su, D., Yu, T., & Fung, P. (2021). Improve Query Focused Abstractive Summarization by Incorporating Answer Relevance. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 3124–3131). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.275