A commonly observed problem with state-of-the-art abstractive summarization models is that the generated summaries can be factually inconsistent with the input documents. The fact that automatic summarization may produce plausible-sounding yet inaccurate summaries is a major concern that limits its wide application. In this paper, we present an approach to improve the factual consistency of abstractive summarization. We first propose an efficient, question-answering-based automatic evaluation metric to measure factual consistency; we then propose a novel learning algorithm that maximizes this metric during model training. Through extensive experiments, we confirm that our method is effective in improving factual consistency and even the overall quality of the summaries, as judged by both automatic metrics and human evaluation.
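As context for the "via question answering" in the title, below is a minimal sketch of how a generic QA-based factual-consistency check can work: answer the same probing questions against the source document and against the summary, and score how often the answers agree. This is not the paper's actual metric (which is detailed in the full text); the Hugging Face checkpoint, the helper name `consistency_score`, and the toy example are all illustrative assumptions.

```python
from transformers import pipeline

# Extractive QA pipeline; the task name is a real Hugging Face pipeline,
# the specific checkpoint is an assumption for illustration only.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

def consistency_score(document: str, summary: str, questions: list) -> float:
    """Answer each question against both texts and return the fraction
    of questions whose answers agree (a crude consistency proxy)."""
    agree = 0
    for q in questions:
        ans_doc = qa(question=q, context=document)["answer"]
        ans_sum = qa(question=q, context=summary)["answer"]
        agree += ans_doc.strip().lower() == ans_sum.strip().lower()
    return agree / max(len(questions), 1)

# Toy usage (made-up data): a mismatched answer flags an inconsistency.
doc = "The company reported a profit of $3 million in 2020."
faithful = "The firm earned $3 million in 2020."
unfaithful = "The firm earned $5 million in 2020."
qs = ["How much profit was reported?"]
print(consistency_score(doc, faithful, qs))    # expected: 1.0
print(consistency_score(doc, unfaithful, qs))  # expected: 0.0
```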
Citation
Nan, F., dos Santos, C. N., Zhu, H., Ng, P., McKeown, K., Nallapati, R., … Xiang, B. (2021). Improving factual consistency of abstractive summarization via question answering. In ACL-IJCNLP 2021 - 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Proceedings of the Conference (pp. 6881–6894). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.acl-long.536