FRSUM: Towards Faithful Abstractive Summarization via Enhancing Factual Robustness

8 Citations (of this article)
23 Readers (Mendeley users who have this article in their library)

Abstract

Despite being able to generate fluent and grammatical text, current Seq2Seq summarization models still suffer from the unfaithful generation problem. In this paper, we study the faithfulness of existing systems from a new perspective of factual robustness, which is the ability to correctly generate factual information over adversarial unfaithful information. We first measure a model's factual robustness by its success rate in defending against adversarial attacks when generating factual information. A factual robustness analysis of a wide range of current systems shows good consistency with human judgments on faithfulness. Inspired by these findings, we propose to improve the faithfulness of a model by enhancing its factual robustness. Specifically, we propose a novel training strategy, namely FRSUM, which teaches the model to defend against both explicit adversarial samples and implicit factual adversarial perturbations. Extensive automatic and human evaluation results show that FRSUM consistently improves the faithfulness of various Seq2Seq models, such as T5 and BART.
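To make the robustness probe described in the abstract concrete, here is a minimal sketch (not the authors' released code) of a "defense success rate" measurement: for each factual token position in a reference summary, check whether the model scores the faithful token above a supplied adversarial substitute. The BART checkpoint and the `attacks` mapping are illustrative assumptions; constructing the attacks (and FRSUM's implicit perturbation training) is outside this sketch.

```python
# Minimal sketch, assuming a HuggingFace BART checkpoint; not the authors'
# implementation. Measures how often the model prefers the faithful token
# over an adversarial substitute at the same decoding step.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

name = "facebook/bart-large-cnn"  # illustrative; the paper evaluates several Seq2Seq models
tokenizer = BartTokenizer.from_pretrained(name)
model = BartForConditionalGeneration.from_pretrained(name).eval()

def defense_success_rate(source: str, summary: str, attacks: dict) -> float:
    """`attacks` maps a decoder position (>= 1) in the tokenized summary to an
    adversarial replacement token, e.g. a swapped entity. This mapping is a
    hypothetical input that an attack-construction step would supply."""
    enc = tokenizer(source, return_tensors="pt", truncation=True)
    dec = tokenizer(summary, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(input_ids=enc.input_ids,
                       attention_mask=enc.attention_mask,
                       decoder_input_ids=dec.input_ids).logits  # (1, T, vocab)
    wins = 0
    for pos, adv_token in attacks.items():
        gold_id = dec.input_ids[0, pos].item()
        adv_id = tokenizer.convert_tokens_to_ids(adv_token)
        step_logits = logits[0, pos - 1]  # logits at pos-1 predict the token at pos
        wins += int(step_logits[gold_id] > step_logits[adv_id])  # successful defense
    return wins / max(len(attacks), 1)
```

A model with higher defense success rate across many such factual positions would count as more factually robust under this reading; FRSUM itself additionally trains against explicit adversarial samples and implicit perturbations, which this probe only measures, not implements.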

References (Powered by Scopus)

Towards Evaluating the Robustness of Neural Networks (6,425 citations)
Get to the Point: Summarization with Pointer-Generator Networks (2,645 citations)
The Proposition Bank: An Annotated Corpus of Semantic Roles (1,675 citations)
Cited by (Powered by Scopus)

WeCheck: Strong Factual Consistency Checker via Weakly Supervised Learning (6 citations)
Promoting Topic Coherence and Inter-Document Consorts in Multi-Document Summarization via Simplicial Complex and Sheaf Graph (4 citations)
Boundary-Aware Abstractive Summarization with Entity-Augmented Attention for Enhancing Faithfulness (1 citation)

Citation (APA)

Wu, W., Li, W., Liu, J., Xiao, X., Cao, Z., Li, S., & Wu, H. (2022). FRSUM: Towards Faithful Abstractive Summarization via Enhancing Factual Robustness. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 3640–3654). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.267

Readers over time: chart omitted (years '22 to '25, 0 to 16 readers).

Readers' Seniority

PhD / Postgrad / Masters / Doc: 4 (44%)
Researcher: 4 (44%)
Lecturer / Post doc: 1 (11%)

Readers' Discipline

Computer Science: 9 (75%)
Neuroscience: 1 (8%)
Medicine and Dentistry: 1 (8%)
Mathematics: 1 (8%)
