Data-Efficient Strategies for Expanding Hate Speech Detection into Under-Resourced Languages


Abstract

Hate speech is a global phenomenon, but most hate speech datasets so far focus on English-language content. This hinders the development of more effective hate speech detection models in hundreds of languages spoken by billions across the world. More data is needed, but annotating hateful content is expensive, time-consuming and potentially harmful to annotators. To mitigate these issues, we explore data-efficient strategies for expanding hate speech detection into under-resourced languages. In a series of experiments with mono- and multilingual models across five non-English languages, we find that 1) a small amount of target-language fine-tuning data is needed to achieve strong performance, 2) the benefits of using more such data decrease exponentially, and 3) initial fine-tuning on readily-available English data can partially substitute target-language data and improve model generalisability. Based on these findings, we formulate actionable recommendations for hate speech detection in low-resource language settings. Content warning: This article contains illustrative examples of hateful language.
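The abstract's third finding describes a two-stage transfer strategy: fine-tune a multilingual model first on readily-available English hate speech data, then on a small amount of target-language data. The sketch below illustrates that idea only; the base model choice (xlm-roberta-base), the hypothetical CSV files, column names, and hyperparameters are illustrative assumptions, not the paper's exact experimental setup.

```python
# Minimal sketch of sequential (English -> target-language) fine-tuning,
# assuming local CSV files with "text" and "label" columns.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "xlm-roberta-base"  # multilingual encoder used as an example base model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

def finetune(dataset, output_dir, epochs=3):
    """Continue fine-tuning the current model on one dataset."""
    dataset = dataset.map(tokenize, batched=True)
    args = TrainingArguments(output_dir=output_dir,
                             num_train_epochs=epochs,
                             per_device_train_batch_size=16)
    Trainer(model=model, args=args, train_dataset=dataset).train()

# Stage 1: fine-tune on English data (hypothetical file name).
english = load_dataset("csv", data_files="english_hate_speech.csv")["train"]
finetune(english, "stage1_english")

# Stage 2: continue fine-tuning on a small target-language sample (hypothetical file name).
target = load_dataset("csv", data_files="target_language_sample.csv")["train"]
finetune(target, "stage2_target")
```

Under the paper's findings, the second stage can be kept small: a modest amount of target-language data already yields most of the benefit, with diminishing returns as more is added.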

Citation (APA)

Röttger, P., Nozza, D., Bianchi, F., & Hovy, D. (2022). Data-Efficient Strategies for Expanding Hate Speech Detection into Under-Resourced Languages. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 5674–5691). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.383
