Analyzing Stereotypes in Generative Text Inference Tasks

Abstract

Stereotypes are inferences drawn about people based on their demographic attributes, which may result in harms to users when a system is deployed. In generative language-inference tasks, given a premise, a model produces plausible hypotheses that follow either logically (natural language inference) or commonsensically (commonsense inference). Such tasks are therefore a fruitful setting in which to explore the degree to which NLP systems encode stereotypes. In our work, we study how stereotypes manifest when the potential targets of stereotypes are situated in real-life, neutral contexts. We collect human judgments on the presence of stereotypes in generated inferences, and compare how perceptions of stereotypes vary due to annotator positionality.
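As a rough illustration of the generative inference setup described above (not the authors' actual models, prompts, or data), one could sample candidate hypotheses from an off-the-shelf language model given a neutral premise and then pass the generations to human annotators for stereotype judgments. The checkpoint, prompt format, and premise below are placeholders.

```python
# Hypothetical sketch of generative inference: sample plausible hypotheses
# from a premise with an off-the-shelf causal LM (assumed checkpoint: gpt2).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

premise = "The nurse picked up a chart at the front desk."  # placeholder premise
prompt = f"Premise: {premise}\nPlausible inference:"

# Sample several candidate hypotheses; in a study like this one, such
# generations would then be judged by annotators for stereotypical content.
outputs = generator(
    prompt,
    max_new_tokens=20,
    num_return_sequences=3,
    do_sample=True,
)
for out in outputs:
    print(out["generated_text"])
```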

Citation (APA)

Sotnikova, A., Cao, Y. T., Daumé, H., & Rudinger, R. (2021). Analyzing Stereotypes in Generative Text Inference Tasks. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 4052–4065). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.355
