Exploring Explainable Selection to Control Abstractive Summarization

10 citations · 40 Mendeley readers

Abstract

Like humans, document summarization models can interpret a document's contents in a number of ways. Unfortunately, the neural models of today are largely black boxes that provide little explanation of how or why they generated a summary the way they did. Therefore, to begin prying open the black box and to inject a level of control into the substance of the final summary, we developed ESCA, a novel select-and-generate framework that focuses on explainability. By revealing the latent centrality and interactions between sentences, along with scores for sentence novelty and relevance, users are given a window into the choices the model is making and an opportunity to guide those choices in a more desirable direction. A novel pair-wise matrix captures the sentence interactions, centrality, and attribute scores, and a mask with tunable attribute thresholds allows the user to control which sentences are likely to be included in the extraction. A sentence-deployed attention mechanism in the abstractor ensures the final summary emphasizes the desired content. Additionally, the encoder is adaptable, supporting both Transformer- and BERT-based configurations. In a series of experiments assessed with ROUGE metrics and two human evaluations, ESCA outperformed eight state-of-the-art models on the CNN/DailyMail and NYT50 benchmark datasets.
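The abstract's "mask with tunable attribute thresholds" can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, threshold defaults, and the use of centrality to rank surviving sentences are assumptions made for illustration only. The idea shown is just the control knob the abstract describes: a sentence is eligible for extraction only when its novelty and relevance scores clear user-set thresholds.

```python
import numpy as np

def select_sentences(novelty, relevance, centrality,
                     nov_thresh=0.5, rel_thresh=0.5):
    """Hypothetical sketch of attribute-thresholded sentence selection.

    A sentence survives the mask only if both its novelty and relevance
    scores clear the user-tunable thresholds; survivors are then ranked
    by centrality, highest first.
    """
    novelty = np.asarray(novelty, dtype=float)
    relevance = np.asarray(relevance, dtype=float)
    centrality = np.asarray(centrality, dtype=float)

    # Tunable mask: both attributes must clear their thresholds.
    mask = (novelty >= nov_thresh) & (relevance >= rel_thresh)

    # Zero out masked sentences so they cannot be extracted.
    scores = np.where(mask, centrality, 0.0)

    # Return indices of surviving sentences, highest centrality first.
    order = np.argsort(-scores)
    return [int(i) for i in order if scores[i] > 0]

# Three sentences: sentence 1 has high centrality but low novelty,
# so the default novelty threshold filters it out.
print(select_sentences([0.9, 0.2, 0.7],   # novelty
                       [0.8, 0.9, 0.6],   # relevance
                       [0.5, 0.9, 0.4]))  # centrality
# → [0, 2]
```

Raising or lowering `nov_thresh` and `rel_thresh` is the user-facing control: stricter thresholds shrink the pool of extractable sentences, steering what the abstractor then emphasizes.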




Citation (APA)

Wang, H., Gao, Y., Bai, Y., Lapata, M., & Huang, H. (2021). Exploring Explainable Selection to Control Abstractive Summarization. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 15, pp. 13933–13941). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i15.17641


Readers' Seniority

PhD / Postgrad / Masters / Doc: 15 (75%)
Researcher: 4 (20%)
Professor / Associate Prof.: 1 (5%)

Readers' Discipline

Computer Science: 19 (86%)
Physics and Astronomy: 1 (5%)
Business, Management and Accounting: 1 (5%)
Engineering: 1 (5%)
