Logic Against Bias: Textual Entailment Mitigates Stereotypical Sentence Reasoning

Abstract

Due to their similarity-based learning objectives, pretrained sentence encoders often internalize stereotypical assumptions that reflect the social biases present in their training corpora. In this paper, we describe several kinds of stereotypes concerning different communities that appear in popular sentence representation models, including pretrained next-sentence-prediction and contrastive sentence representation models. We compare such models to textual entailment models that learn language logic for a variety of downstream language understanding tasks. By comparing strong pretrained models based on text similarity with textual entailment learning, we conclude that explicit logic learning with textual entailment can significantly reduce bias and improve the recognition of social communities, without an explicit de-biasing process. The code, model, and data associated with this work are publicly available at https://github.com/luohongyin/ESP.git.

Cite

APA

Luo, H., & Glass, J. (2023). Logic Against Bias: Textual Entailment Mitigates Stereotypical Sentence Reasoning. In EACL 2023 - 17th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 1235–1246). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.eacl-main.89
