Improving Abstractive Summarization with Commonsense Knowledge


Abstract

Large-scale pretrained models have demonstrated strong performance on several natural language generation and understanding benchmarks. However, introducing commonsense into them to generate more realistic text remains a challenge. Inspired by previous work on commonsense knowledge generation and generative commonsense reasoning, we introduce two methods to add commonsense reasoning skills and knowledge to abstractive summarization models. Both methods outperform the baseline on ROUGE scores. Human evaluation results suggest that summaries generated by our methods are more realistic and have fewer commonsense errors.

Citation (APA)
Nair, P. A., & Singh, A. K. (2021). Improving Abstractive Summarization with Commonsense Knowledge. In International Conference Recent Advances in Natural Language Processing, RANLP (Vol. 2021-September, pp. 135–143). Incoma Ltd. https://doi.org/10.26615/issn.2603-2821.2021_019
