CTRLSUM: Towards Generic Controllable Text Summarization

Abstract

Current summarization systems yield generic summaries that are disconnected from users' preferences and expectations. To address this limitation, we present CTRLSUM, a generic framework for controlling generated summaries through a set of keywords. During training, keywords are extracted automatically without requiring additional human annotations. At test time, CTRLSUM features a control function that maps control signals to keywords; by engineering this control function, the same trained model can be applied to control summaries along various dimensions, without affecting the model training process or the pretrained models. We additionally explore combining keywords with text prompts to support further control tasks. Experiments demonstrate the effectiveness of CTRLSUM on three domains of summarization datasets and five control tasks: (1) entity-centric and (2) length-controllable summarization, (3) contribution summarization on scientific papers, (4) invention purpose summarization on patent filings, and (5) question-guided summarization on news articles. Moreover, when used in a standard, unconstrained summarization setting, CTRLSUM is comparable to or better than strong pretrained systems.
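To illustrate the keyword-conditioning idea described in the abstract, the sketch below prepends user-supplied keywords to the source document before passing it to a pretrained seq2seq summarizer. This is a minimal sketch, not the authors' released code: the `facebook/bart-large-cnn` checkpoint, the `" | "` / `" => "` input format, and the `control_function` helper are assumptions chosen for illustration, and CTRLSUM's actual checkpoints and input conventions may differ.

```python
# Minimal sketch of keyword-conditioned summarization (NOT the official CTRLSUM code).
# Assumptions: a generic BART summarization checkpoint and a simple
# "keywords => document" input format, both chosen purely for illustration.
from transformers import BartForConditionalGeneration, BartTokenizer

MODEL_NAME = "facebook/bart-large-cnn"  # placeholder; CTRLSUM fine-tunes its own model
tokenizer = BartTokenizer.from_pretrained(MODEL_NAME)
model = BartForConditionalGeneration.from_pretrained(MODEL_NAME)

def control_function(entity=None, length=None):
    """Hypothetical control function: map user control signals to keywords."""
    keywords = []
    if entity:
        keywords.append(entity)      # entity-centric control
    if length == "short":
        keywords = keywords[:1]      # fewer keywords as a crude length heuristic
    return keywords

def summarize(document, keywords):
    # Prepend keywords to the source so generation can condition on them.
    text = " | ".join(keywords) + " => " + document if keywords else document
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    ids = model.generate(**inputs, num_beams=4, max_length=120)
    return tokenizer.decode(ids[0], skip_special_tokens=True)

article = "..."  # source article text
print(summarize(article, control_function(entity="NASA")))
```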

Cite (APA)
He, J., Kryściński, W., McCann, B., Rajani, N., & Xiong, C. (2022). CTRLSUM: Towards Generic Controllable Text Summarization. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 5879–5915). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.396
