ABARUAH at SemEval-2019 task 5: Bi-directional LSTM for hate speech detection


Abstract

In this paper, we present the results obtained using bi-directional long short-term memory (BiLSTM) models, with and without attention, and Logistic Regression (LR) models for SemEval-2019 Task 5, "HatEval: Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter". This paper presents the results for Subtask A for the English language. The results of the BiLSTM and LR models are compared for two types of preprocessing: one with no stemming performed and no stopwords removed, and the other with stemming performed and stopwords removed. The BiLSTM model without attention performed best with the first type of preprocessing, while the LR model with character n-grams performed best with the second. The BiLSTM model obtained an F1 score of 0.51 on the test set and an official ranking of 8/71.
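The abstract contrasts two preprocessing variants and two model families. The sketch below is a minimal illustration only, not the authors' released code: it assumes an NLTK / scikit-learn / Keras toolchain, and every function name, parameter value, and placeholder variable in it is a hypothetical choice made for the example.

```python
import re

from nltk.corpus import stopwords          # requires nltk.download("stopwords")
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
import tensorflow as tf

stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))


def preprocess(tweet, stem_and_remove_stopwords=False):
    """Lowercase and tokenize a tweet; optionally stem and drop stopwords.

    The flag switches between the two preprocessing variants compared
    in the abstract (illustrative regex tokenizer, not the paper's).
    """
    tokens = re.findall(r"[a-z0-9#@']+", tweet.lower())
    if stem_and_remove_stopwords:
        tokens = [stemmer.stem(t) for t in tokens if t not in stop_words]
    return " ".join(tokens)


# Logistic Regression over character n-grams (the stronger model for the
# stemmed / stopword-removed variant, per the abstract); n-gram range is a guess.
lr_char_ngrams = Pipeline([
    ("tfidf", TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 5))),
    ("clf", LogisticRegression(max_iter=1000)),
])


def build_bilstm(vocab_size=20000, embed_dim=100, lstm_units=64):
    """Minimal BiLSTM binary classifier without attention (Keras); sizes are assumed."""
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, embed_dim),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(lstm_units)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model


# Toy usage of the LR baseline with placeholder data.
raw_texts = ["Example tweet one", "Another example tweet"]
labels = [0, 1]
texts = [preprocess(t, stem_and_remove_stopwords=True) for t in raw_texts]
lr_char_ngrams.fit(texts, labels)
```

In this sketch the only difference between the two experimental conditions is the preprocessing flag; the same vectorizer and models would be fit on either variant of the text.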

Citation (APA)

Baruah, A., Barbhuiya, F. A., & Dey, K. (2019). ABARUAH at SemEval-2019 task 5: Bi-directional LSTM for hate speech detection. In NAACL HLT 2019 - International Workshop on Semantic Evaluation, SemEval 2019, Proceedings of the 13th Workshop (pp. 371–376). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/s19-2065
