Understanding and Interpreting the Impact of User Context in Hate Speech Detection


Abstract

As hate speech spreads across social media and online communities, research continues to work on its automatic detection. Recently, recognition performance has been increasing thanks to advances in deep learning and the integration of user features. This work investigates the effects that such features can have on a detection model. Unlike previous research, we show that a simple performance comparison does not expose the full impact of including contextual and user information. By leveraging explainability techniques, we show (1) that user features play a role in the model's decisions and (2) how they affect the feature space learned by the model. Besides revealing that user features are the reason for performance gains, and illustrating why, we show how such techniques can be combined to better understand the model and to detect unintended bias.
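The abstract describes attributing a classifier's decisions partly to user-context features via explainability techniques. As a minimal, hypothetical sketch (not the paper's actual model, features, or attribution method), the idea can be illustrated with a toy classifier over a synthetic "text signal" and "user signal", using permutation importance to quantify how much each feature contributes to predictions:

```python
# Hedged sketch: synthetic stand-ins for a text-based signal and a
# user-context signal; the paper's real setup (deep models on social
# media data) differs. Permutation importance here plays the role of
# a generic explainability technique.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000
text_score = rng.normal(size=n)    # stand-in for a text-derived feature
user_history = rng.normal(size=n)  # stand-in for a user-context feature
noise = rng.normal(size=n)         # an uninformative feature for contrast

# Labels depend on both the text signal and the user-context signal.
y = (0.8 * text_score + 0.6 * user_history + 0.3 * noise > 0).astype(int)

X = np.column_stack([text_score, user_history, noise])
clf = LogisticRegression().fit(X, y)

# Drop in accuracy when a feature is shuffled = its importance.
result = permutation_importance(clf, X, y, n_repeats=20, random_state=0)
for name, imp in zip(["text_score", "user_history", "noise"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

In this toy setting the user-context feature shows a clearly nonzero importance, mirroring the paper's point that user features demonstrably influence the model's decisions rather than merely shifting an aggregate metric.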

Citation (APA)

Mosca, E., Wich, M., & Groh, G. (2021). Understanding and Interpreting the Impact of User Context in Hate Speech Detection. In SocialNLP 2021 - 9th International Workshop on Natural Language Processing for Social Media, Proceedings of the Workshop (pp. 91–102). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.socialnlp-1.8
