Factually Consistent Summarization via Reinforcement Learning with Textual Entailment Feedback

32 Citations · 38 Readers

Abstract

Despite the seeming success of contemporary grounded text generation systems, they often generate text that is factually inconsistent with their input. This phenomenon is especially pronounced in tasks like summarization, in which the generated summaries should be corroborated by their source article. In this work we leverage recent progress on textual entailment models to directly address this problem for abstractive summarization systems. We use reinforcement learning with reference-free, textual-entailment rewards to optimize for factual consistency and explore the ensuing tradeoffs, as improved consistency may come at the cost of less informative or more extractive summaries. Our results, according to both automatic metrics and human evaluation, show that our method considerably improves the faithfulness, salience, and conciseness of the generated summaries.
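
The core idea described in the abstract, using a reference-free entailment score between the source article and a candidate summary as the reward signal in reinforcement learning, can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes an off-the-shelf NLI checkpoint (roberta-large-mnli from Hugging Face Transformers) and uses the probability of the entailment class, with the article as premise and the summary as hypothesis, as the scalar reward a policy-gradient loop would maximize; the paper's actual entailment model, input handling, and RL algorithm may differ.

```python
# Minimal sketch of a reference-free entailment reward (NOT the paper's exact setup).
# Assumption: an off-the-shelf NLI classifier with an "entailment" label is a
# reasonable stand-in for the entailment model used as the RL reward.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"  # assumed checkpoint; any NLI classifier works similarly
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
nli_model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
nli_model.eval()

# For roberta-large-mnli the label order is [contradiction, neutral, entailment].
ENTAILMENT_INDEX = 2

def entailment_reward(source: str, summary: str) -> float:
    """Return P(entailment | premise=source, hypothesis=summary) as a scalar reward."""
    inputs = tokenizer(source, summary, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = nli_model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)
    return probs[0, ENTAILMENT_INDEX].item()

# Usage: the reward would be fed to a policy-gradient update (e.g., PPO) of the
# summarization policy; that RL loop is omitted here.
reward = entailment_reward(
    "The city council approved the new budget on Tuesday after a lengthy debate.",
    "The council approved the budget.",
)
print(f"entailment reward: {reward:.3f}")
```

Because the reward compares the summary only against the source article (no reference summary is needed), it directly penalizes hallucinated content, which is the tradeoff the abstract discusses: pushing the policy toward entailed, and therefore sometimes more extractive or less informative, summaries.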

Cite (APA)

Roit, P., Ferret, J., Shani, L., Aharoni, R., Cideron, G., Dadashi, R., … Szpektor, I. (2023). Factually Consistent Summarization via Reinforcement Learning with Textual Entailment Feedback. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 6252–6272). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.344