Learning from the worst: Dynamically generated datasets to improve online hate detection

Citations: 98
Mendeley readers: 138

Abstract

We present a human-and-model-in-the-loop process for dynamically generating datasets and training better-performing and more robust hate detection models. We provide a new dataset of ~40,000 entries, generated and labelled by trained annotators over four rounds of dynamic data creation. It includes ~15,000 challenging perturbations, and each hateful entry has fine-grained labels for the type and target of hate. Hateful entries make up 54% of the dataset, which is substantially higher than in comparable datasets. We show that model performance is substantially improved using this approach: models trained on later rounds of data collection perform better on test sets and are harder for annotators to trick. They also perform better on HATECHECK, a suite of functional tests for online hate detection. We provide the code, dataset, and annotation guidelines for other researchers to use.
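At its core, the dynamic data collection process is a loop: train a model, have trained annotators write and label content that tries to trick it, fold those entries back into the training data, and retrain. The Python sketch below illustrates that loop under stated assumptions; the `Entry` fields, the `train_classifier` stand-in, and the `annotate` callback are hypothetical placeholders, not the authors' released code.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Entry:
    """One dataset entry; hateful entries carry fine-grained labels."""
    text: str
    label: str                        # "hate" or "nothate"
    hate_type: Optional[str] = None   # type of hate (hateful entries only)
    target: Optional[str] = None      # targeted group (hateful entries only)
    round_id: int = 0                 # round of data creation that produced it
    is_perturbation: bool = False     # True for challenging perturbations


def train_classifier(dataset):
    """Stand-in for fine-tuning a transformer on all data collected so far.

    A trivial keyword model is used here only so the sketch runs end to end.
    """
    hate_vocab = {w for e in dataset if e.label == "hate"
                  for w in e.text.lower().split()}
    return lambda text: ("hate" if set(text.lower().split()) & hate_vocab
                         else "nothate")


def dynamic_data_collection(seed_entries, annotate, num_rounds=4):
    """Human-and-model-in-the-loop data creation over `num_rounds` rounds.

    `annotate(model, round_id)` stands in for the human step: trained
    annotators write, perturb, and label entries aimed at tricking `model`.
    """
    dataset = list(seed_entries)
    model = train_classifier(dataset)
    for r in range(1, num_rounds + 1):
        new_entries = annotate(model, r)   # adversarial, human-labelled entries
        dataset.extend(new_entries)        # all new entries join the dataset
        model = train_classifier(dataset)  # retrain; round r+1 targets this model
    return dataset, model
```

Each round makes the target model harder to fool, so annotators must produce progressively more challenging content; this is the mechanism behind the reported gains on held-out test sets and on HATECHECK.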

Cite (APA)

Vidgen, B., Thrush, T., Waseem, Z., & Kiela, D. (2021). Learning from the worst: Dynamically generated datasets to improve online hate detection. In ACL-IJCNLP 2021 - 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Proceedings of the Conference (pp. 1667–1682). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.acl-long.132
