Fairness of Extractive Text Summarization


Abstract

We propose to evaluate extractive summarization algorithms from a completely new perspective. Since an extractive summarization algorithm selects a subset of the textual units in the input data for inclusion in the summary, we investigate whether this selection is fair. We apply several summarization algorithms to datasets in which a sensitive attribute (e.g., gender, political leaning) is associated with each textual unit, and find that the generated summaries often have very different distributions of that attribute. Specifically, some classes of the textual units are under-represented in the summaries according to the fairness notion of adverse impact. To our knowledge, this is the first work on fairness of summarization, and it is likely to open up interesting research problems.
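The adverse-impact notion mentioned above is commonly operationalized as the "four-fifths rule": a group's selection rate should be at least 80% of the best-represented group's rate. A minimal sketch of such a check is shown below; the function name, threshold, and toy data are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical adverse-impact (four-fifths rule) check for an extractive
# summary. This is an illustrative sketch, not the paper's actual code.

def adverse_impact_ratio(input_labels, summary_labels, group):
    """Selection rate of `group` in the summary, divided by the
    selection rate of the best-represented group."""
    groups = set(input_labels)
    # Selection rate = fraction of a group's input units chosen for the summary.
    rates = {
        g: sum(1 for x in summary_labels if x == g) /
           sum(1 for x in input_labels if x == g)
        for g in groups
    }
    best = max(rates.values())
    return rates[group] / best

# Toy example: 10 input units per group; the summary keeps
# 5 units from group A but only 2 from group B.
inputs = ["A"] * 10 + ["B"] * 10
summary = ["A"] * 5 + ["B"] * 2
ratio = adverse_impact_ratio(inputs, summary, "B")
print(round(ratio, 2))  # 0.4 -> below the common 0.8 (four-fifths) threshold
```

A ratio below 0.8 for any group would flag the summary as exhibiting adverse impact against that group under this convention.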

Citation (APA)

Shandilya, A., Ghosh, K., & Ghosh, S. (2018). Fairness of Extractive Text Summarization. In The Web Conference 2018 - Companion of the World Wide Web Conference, WWW 2018 (pp. 97–98). Association for Computing Machinery, Inc. https://doi.org/10.1145/3184558.3186947
