Are Abstractive Summarization Models truly 'Abstractive'? An Empirical Study to Compare the two Forms of Summarization

Abstract

Automatic Text Summarization has seen a large paradigm shift from extractive methods to abstractive (or generation-based) methods in the last few years. This can be attributed to the availability of large autoregressive language models (Lewis et al., 2019; Zhang et al., 2019a) that have been shown to outperform extractive methods. In this work, we revisit extractive methods and study their performance against state-of-the-art (SOTA) abstractive models. Through extensive studies, we notice that abstractive methods are not yet completely abstractive in their generated summaries. In addition to this finding, we propose an evaluation metric that could help the summarization research community measure the degree of abstractiveness of a summary in comparison to its extractive counterpart. To confirm the generalizability of our findings, we conduct experiments on two summarization datasets using five powerful extractive and abstractive summarization techniques and study their levels of abstraction.
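
The abstract does not spell out the proposed metric. A common proxy in the summarization literature for the degree of abstractiveness is the novel n-gram ratio: the fraction of summary n-grams that never appear in the source document. The Python sketch below implements that proxy purely as an illustration; the function name `novel_ngram_ratio` and the whitespace tokenization are assumptions here, not the paper's actual metric.

```python
from typing import List, Set, Tuple


def ngrams(tokens: List[str], n: int) -> Set[Tuple[str, ...]]:
    """Return the set of n-grams (as tuples) in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def novel_ngram_ratio(source: str, summary: str, n: int = 2) -> float:
    """Fraction of summary n-grams that do not occur in the source.

    0.0 means every n-gram is copied verbatim (fully extractive);
    1.0 means no n-gram is copied (maximally abstractive).
    Note: this is an illustrative proxy, not the metric from the paper.
    """
    src_ngrams = ngrams(source.lower().split(), n)
    sum_ngrams = ngrams(summary.lower().split(), n)
    if not sum_ngrams:
        return 0.0
    return len(sum_ngrams - src_ngrams) / len(sum_ngrams)


# A summary that copies a phrase verbatim scores low on this proxy.
doc = "the cat sat on the mat and watched the birds outside"
summary = "the cat sat on the mat"
print(novel_ngram_ratio(doc, summary, n=2))  # 0.0 -- fully extractive
```

Under this proxy, an abstractive model that mostly copies source spans would score close to an extractive system, which is the kind of comparison the paper's finding suggests.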

Cite

CITATION STYLE

APA

Kumar, V. B., & Gangadharaiah, R. (2022). Are Abstractive Summarization Models truly “Abstractive”? An Empirical Study to Compare the two Forms of Summarization. In GEM 2022 - 2nd Workshop on Natural Language Generation, Evaluation, and Metrics, Proceedings of the Workshop (pp. 198–206). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.gem-1.17
