Warning: content in this paper may be upsetting or offensive to some readers. Dogwhistles are coded expressions that simultaneously convey one meaning to a broad audience and a second one, often hateful or provocative, to a narrow in-group; they are deployed to evade both political repercussions and algorithmic content moderation. For example, in the sentence “we need to end the cosmopolitan experiment,” the word “cosmopolitan” likely means “worldly” to many, but secretly means “Jewish” to a select few. We present the first large-scale computational investigation of dogwhistles. We develop a typology of dogwhistles, curate the largest-to-date glossary of over 300 dogwhistles with rich contextual information and examples, and analyze their usage in historical U.S. politicians' speeches. We then assess whether a large language model (GPT-3) can identify dogwhistles and their meanings, and find that GPT-3's performance varies widely across types of dogwhistles and targeted groups. Finally, we show that harmful content containing dogwhistles avoids toxicity detection, highlighting online risks of such coded language. This work sheds light on the theoretical and applied importance of dogwhistles in both NLP and computational social science, and provides resources for future research in modeling dogwhistles and mitigating their online harms.
Citation:
Mendelsohn, J., Le Bras, R., Choi, Y., & Sap, M. (2023). From Dogwhistles to Bullhorns: Unveiling Coded Rhetoric with Language Models. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 15162–15180). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.845