Most neural abstractive summarization models can produce high-quality summaries; however, these summaries still frequently contain factual errors. Existing factuality-oriented abstractive summarization models only consider integrating factual information and ignore the causes of factual errors. To address this issue, we propose DASum, a factuality-oriented abstractive summarization model based on a new task, factual relation discrimination, which identifies the causes of factual errors. First, we use data augmentation methods to construct counterfactual summaries (i.e., negative samples) and build a factual summarization dataset. Then, we introduce the factual relation discrimination task, which determines the factuality of the dependency relations in a summary during generation and guides DASum toward generating factual relations, thereby improving the factuality of summaries. Experimental results on the CNN/DM and XSUM datasets show that DASum outperforms several state-of-the-art baselines on factuality metrics.
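To make the two steps concrete, below is a minimal, hypothetical Python sketch, not the authors' implementation. It assumes spaCy with the en_core_web_sm model; entity swapping stands in for the paper's (unspecified in the abstract) data augmentation methods, and a simple lexical match of a summary's dependency arcs against the source stands in for the learned discrimination signal, which in the paper is applied during summary generation rather than as a post-hoc heuristic.

```python
import random
import spacy

# Illustrative sketch only; assumes the en_core_web_sm model is installed.
nlp = spacy.load("en_core_web_sm")

def make_counterfactual(summary: str, source: str) -> str:
    """Build a negative sample by swapping one summary entity for a
    different same-type entity from the source (a hypothetical stand-in
    for the paper's data augmentation methods)."""
    sum_doc, src_doc = nlp(summary), nlp(source)
    for ent in sum_doc.ents:
        candidates = [e.text for e in src_doc.ents
                      if e.label_ == ent.label_ and e.text != ent.text]
        if candidates:
            return summary.replace(ent.text, random.choice(candidates), 1)
    return summary  # nothing swappable; caller may discard this sample

def dependency_arcs(text: str) -> set:
    """Return (head lemma, dependency label, dependent lemma) triples,
    skipping punctuation and determiner arcs."""
    return {(t.head.lemma_.lower(), t.dep_, t.lemma_.lower())
            for t in nlp(text) if t.dep_ not in ("punct", "det")}

def label_relations(summary: str, source: str) -> list:
    """Weakly label each summary arc as factual (True) when the same arc
    also occurs in the source: a crude lexical proxy for the learned
    factual relation discrimination described in the paper."""
    src = dependency_arcs(source)
    return [(arc, arc in src) for arc in dependency_arcs(summary)]
```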
Citation:
Gao, Z., Li, P., Jiang, F., Chu, X., & Zhu, Q. (2023). Factual Relation Discrimination for Factuality-oriented Abstractive Summarization. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 977–986). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.69