“You are grounded!”: Latent name artifacts in pre-trained language models

Abstract

Pre-trained language models (LMs) may perpetuate biases originating in their training corpus to downstream models. We focus on artifacts associated with the representation of given names (e.g., Donald), which, depending on the corpus, may be associated with specific entities, as indicated by next-token prediction (e.g., Trump). While helpful in some contexts, such grounding also occurs in under-specified or inappropriate contexts. For example, endings generated for 'Donald is a' differ substantially from those generated for other names and often carry more negative sentiment than average. We demonstrate the potential effect on downstream tasks with reading comprehension probes in which perturbing a name changes the model's answers. As a silver lining, our experiments suggest that additional pre-training on different corpora may mitigate this bias.
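To illustrate the kind of name-grounding probe the abstract describes, here is a minimal sketch (not the authors' code) of querying a causal LM for its most likely next tokens after a given name. It assumes GPT-2 loaded through the HuggingFace transformers library; the model choice, the example names, and the top_next_tokens helper are illustrative assumptions, not details taken from the paper.

```python
# Minimal next-token probe for given names (illustrative sketch, not the
# authors' implementation). Assumes GPT-2 via HuggingFace transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def top_next_tokens(prefix, k=5):
    """Return the k most probable next tokens for `prefix` with their probabilities."""
    inputs = tokenizer(prefix, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits           # shape: (1, seq_len, vocab_size)
    probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the next token
    top = torch.topk(probs, k)
    return [(tokenizer.decode(int(idx)).strip(), prob.item())
            for idx, prob in zip(top.indices, top.values)]

# If a name is strongly grounded in the pre-training corpus, a surname such as
# "Trump" may dominate the continuation for "Donald" but not for other names.
for name in ["Donald", "Hillary", "Emily"]:
    print(name, top_next_tokens(name))
```

Prefixes such as 'Donald is a' can be probed the same way; comparing sampled continuations (and their sentiment) across different names is one way to surface the asymmetry the abstract describes.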

Citation (APA)

Shwartz, V., Rudinger, R., & Tafjord, O. (2020). "You are grounded!": Latent name artifacts in pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020) (pp. 6850–6861). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.emnlp-main.556
