Class-based prediction errors to detect hate speech with out-of-vocabulary words

19 citations · 112 Mendeley readers

Abstract

Common approaches to text categorization essentially rely either on n-gram counts or on word embeddings. This presents important difficulties in highly dynamic or quickly interacting environments, where the appearance of new words and/or varied misspellings is the norm. A paradigmatic example of this situation is abusive online behavior, with social networks and media platforms struggling to effectively combat uncommon or non-blacklisted hate words. To better deal with these issues in such fast-paced environments, we propose using the error signal of class-based language models as input to text classification algorithms. In particular, we train a next-character prediction model for each class, and then exploit the error of such class-based models to inform a neural network classifier. This way, we shift from the ability to describe seen documents to the ability to predict unseen content. Preliminary studies using out-of-vocabulary splits from abusive tweet data show promising results, outperforming competitive text categorization strategies by 4–11%.
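The mechanism described in the abstract can be illustrated with a short, self-contained sketch. This is an illustrative approximation rather than the authors' implementation: smoothed character trigram models stand in for the paper's neural next-character predictor, a lowest-error decision rule stands in for the neural network classifier that consumes the error signals, and the names (CharTrigramLM, classify) and toy training strings are hypothetical.

import math
from collections import defaultdict

class CharTrigramLM:
    """Per-class next-character model: P(next char | previous two chars)."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.vocab = set()

    def train(self, texts):
        for text in texts:
            padded = "~~" + text  # '~' pads the start of each document
            for i in range(2, len(padded)):
                ctx, nxt = padded[i - 2:i], padded[i]
                self.counts[ctx][nxt] += 1
                self.vocab.add(nxt)

    def error(self, text):
        """Average next-character cross-entropy (bits/char) on unseen text."""
        padded = "~~" + text
        total, n, v = 0.0, 0, max(len(self.vocab), 1)
        for i in range(2, len(padded)):
            ctx, nxt = padded[i - 2:i], padded[i]
            num = self.counts[ctx][nxt] + 1                # add-one smoothing
            den = sum(self.counts[ctx].values()) + v
            total += -math.log2(num / den)
            n += 1
        return total / max(n, 1)

def classify(text, class_lms):
    """Pick the class whose language model is least 'surprised' by the text."""
    errors = {label: lm.error(text) for label, lm in class_lms.items()}
    return min(errors, key=errors.get), errors

# Toy usage: one language model per class, trained on class-specific documents.
abusive_lm, clean_lm = CharTrigramLM(), CharTrigramLM()
abusive_lm.train(["you are trash", "get lost you loser"])      # illustrative only
clean_lm.train(["have a nice day", "thanks for sharing this"])
label, errs = classify("what a l0ser", {"abusive": abusive_lm, "clean": clean_lm})
print(label, errs)

As the abstract states, the actual system feeds the class-conditional prediction errors into a neural network classifier rather than simply taking the minimum, which is what allows it to generalize to unseen, out-of-vocabulary spellings.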

Cite (APA)

Serrà, J., Stringhini, G., Leontiadis, I., Blackburn, J., Spathis, D., & Vakali, A. (2017). Class-based prediction errors to detect hate speech with out-of-vocabulary words. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 36–40). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w17-3005
