On the Impact of Word Representation in Hate Speech and Offensive Language Detection and Explanation

Citations: 3 · Readers (Mendeley): 19

Abstract

Online hate speech and offensive language are widely recognized as critical social problems. In response, several recent works have applied machine learning to the detection and explanation of hate speech and offensive language. Although these approaches detect and explain such samples effectively, they do not explore the impact of how those samples are represented. In this work, we introduce a novel, pronunciation-based representation of hate speech and offensive language samples that enables their detection with high accuracy. To demonstrate its effectiveness, we extend an existing hate speech and offensive language defense model based on deep Long Short-Term Memory (LSTM) neural networks, training it on our pronunciation-based representation of the samples. We find that the pronunciation-based representation significantly reduces noise in the datasets and improves the overall performance of the existing model.
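The abstract does not specify the exact pronunciation encoding the authors use. As an illustrative stand-in only, the classic Soundex algorithm (an assumption, not the paper's method) shows the general idea: words that sound alike, including digit-obfuscated spellings such as "n1ght" for "night", collapse to the same code, which is the kind of noise reduction the abstract describes.

```python
# Hedged sketch: Soundex as a stand-in for a pronunciation-based
# representation. Words that sound alike (including leetspeak-style
# obfuscations) collapse to a single 4-character code.

SOUNDEX_GROUPS = {
    "BFPV": "1", "CGJKQSXZ": "2", "DT": "3",
    "L": "4", "MN": "5", "R": "6",
}

def _digit(ch: str) -> str:
    """Return the Soundex digit for a letter, or '' for vowels/others."""
    for letters, digit in SOUNDEX_GROUPS.items():
        if ch in letters:
            return digit
    return ""

def soundex(word: str) -> str:
    """Encode a word as a 4-character Soundex code (letter + 3 digits)."""
    word = "".join(c for c in word.upper() if c.isalpha() or c.isdigit())
    if not word:
        return "0000"
    encoded = word[0]
    prev = _digit(word[0])
    for ch in word[1:]:
        d = _digit(ch)
        if d and d != prev:
            encoded += d
        if ch not in "HW":  # H and W do not separate duplicate codes
            prev = d
        if len(encoded) == 4:
            break
    return (encoded + "000")[:4]

# An obfuscated spelling maps to the same code as the plain word:
print(soundex("night"), soundex("n1ght"))  # both "N230"
```

In a pipeline like the one the abstract describes, such codes (or richer phoneme sequences, e.g. from a pronouncing dictionary) would replace raw tokens before they are fed to the LSTM, so character-level obfuscations no longer produce out-of-vocabulary inputs.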

Citation (APA)

Hu, R., Dorris, W., Vishwamitra, N., Luo, F., & Costello, M. (2020). On the Impact of Word Representation in Hate Speech and Offensive Language Detection and Explanation. In CODASPY 2020 - Proceedings of the 10th ACM Conference on Data and Application Security and Privacy (pp. 171–173). Association for Computing Machinery, Inc. https://doi.org/10.1145/3374664.3379535
