Improving factual consistency of abstractive summarization via question answering

Citations: 66
Readers: 137 (Mendeley users who have this article in their library)

Abstract

A commonly observed problem with state-of-the-art abstractive summarization models is that the generated summaries can be factually inconsistent with the input documents. The fact that automatic summarization may produce plausible-sounding yet inaccurate summaries is a major concern that limits its wide application. In this paper we present an approach to address factual consistency in summarization. We first propose an efficient automatic evaluation metric to measure factual consistency; next, we propose a novel learning algorithm that maximizes the proposed metric during model training. Through extensive experiments, we confirm that our method is effective in improving factual consistency and even overall quality of the summaries, as judged by both automatic metrics and human evaluation.
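The metric outlined in the abstract follows the general question-generation/question-answering recipe for checking factual consistency. The sketch below is a minimal illustration of that idea, not the paper's exact metric: it generates questions from the summary, answers each question against both the summary and the source document, and averages the token-level F1 between the two answers. `gen_questions` and `answer` are assumed, user-supplied callables (e.g. wrappers around any question-generation and extractive-QA models); they are illustrative placeholders, not components named in the paper.

```python
# Minimal sketch of a QG/QA-based factual-consistency score (an assumption for
# illustration, not the authors' exact metric): questions generated from the
# summary should be answerable from the source document with matching answers
# if the summary is factually consistent.
from collections import Counter
from typing import Callable, List


def token_f1(pred: str, gold: str) -> float:
    """Token-overlap F1, as in SQuAD-style answer comparison."""
    p, g = pred.lower().split(), gold.lower().split()
    if not p or not g:
        return float(p == g)
    common = sum((Counter(p) & Counter(g)).values())
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)


def qa_consistency_score(
    summary: str,
    document: str,
    gen_questions: Callable[[str], List[str]],  # hypothetical QG wrapper
    answer: Callable[[str, str], str],          # hypothetical QA wrapper: (question, context) -> answer
) -> float:
    """Average agreement between answers grounded in the summary and in the document."""
    questions = gen_questions(summary)
    if not questions:
        return 0.0
    scores = [token_f1(answer(q, document), answer(q, summary)) for q in questions]
    return sum(scores) / len(scores)
```

A score like this can then serve as a sequence-level reward that a training procedure maximizes, which is the role the proposed metric plays in the paper's learning algorithm.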




Citation (APA)

Nan, F., dos Santos, C. N., Zhu, H., Ng, P., McKeown, K., Nallapati, R., … Xiang, B. (2021). Improving factual consistency of abstractive summarization via question answering. In ACL-IJCNLP 2021 - 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Proceedings of the Conference (pp. 6881–6894). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.acl-long.536

Readers' Seniority

PhD / Post grad / Masters / Doc: 43 (73%)
Researcher: 10 (17%)
Lecturer / Post doc: 5 (8%)
Professor / Associate Prof.: 1 (2%)

Readers' Discipline

Computer Science: 60 (88%)
Linguistics: 5 (7%)
Psychology: 2 (3%)
Neuroscience: 1 (1%)
