Uncovering Implicit Gender Bias in Narratives through Commonsense Inference

Abstract

Pre-trained language models learn socially harmful biases from their training corpora, and may repeat these biases when used for generation. We study gender biases associated with the protagonist in model-generated stories. Such biases may be expressed either explicitly ("women can't park") or implicitly (e.g., an unsolicited male character guides her into a parking space). We focus on implicit biases and use a commonsense reasoning engine to uncover them. Specifically, we infer and analyze the protagonist's motivations, attributes, mental states, and implications on others. Our findings on implicit biases are in line with prior work that studied explicit biases, for example showing that female characters' portrayal is centered around appearance, while male characters' portrayal is centered around intellect.
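The method described above relies on a commonsense inference model to read off implicit properties of a story's protagonist. As a rough illustration of that kind of pipeline, the sketch below queries a COMET-style sequence-to-sequence model, trained on ATOMIC-style relations such as xIntent (motivations), xAttr (attributes), xReact (mental states), and oReact (implications on others). The checkpoint name and the "head event + relation + [GEN]" query format follow the public COMET-ATOMIC 2020 release; both are assumptions for illustration, not necessarily the authors' exact setup.

```python
# Sketch: probing a COMET-style model for implicit inferences about a story
# character. Assumes a seq2seq COMET checkpoint is available on the Hugging
# Face Hub; the model name below is illustrative, not the paper's exact model.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "allenai/comet-atomic_2020_BART"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def infer(head_event: str, relation: str, num_beams: int = 5) -> list[str]:
    """Generate commonsense inferences for a (head event, relation) query.

    Uses the "head relation [GEN]" input format from the COMET-ATOMIC 2020
    generation example (an assumption about the checkpoint's training format).
    """
    query = f"{head_event} {relation} [GEN]"
    inputs = tokenizer(query, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        num_beams=num_beams,
        num_return_sequences=num_beams,
        max_length=32,
    )
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)

# Probe a protagonist along the dimensions the abstract lists: motivations
# (xIntent), attributes (xAttr), mental states (xReact), and implications
# on others (oReact).
sentence = "PersonX tries to park the car"
for relation in ["xIntent", "xAttr", "xReact", "oReact"]:
    print(relation, infer(sentence, relation))
```

Aggregating such inferences over many generated stories, split by protagonist gender, is what would let an analysis surface patterns like the appearance-versus-intellect contrast the abstract reports.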

Citation (APA)

Huang, T., Brahman, F., Shwartz, V., & Chaturvedi, S. (2021). Uncovering Implicit Gender Bias in Narratives through Commonsense Inference. In Findings of the Association for Computational Linguistics: EMNLP 2021 (pp. 3866–3873). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.findings-emnlp.326
