Comparative Studies of Detecting Abusive Language on Twitter

40 citations · 167 Mendeley readers

Abstract

The context-dependent nature of online aggression makes annotating large collections of data extremely difficult. Previously studied datasets in abusive language detection have been insufficient in size to efficiently train deep learning models. Recently, Hate and Abusive Speech on Twitter, a dataset much greater in size and reliability, has been released. However, this dataset has not been comprehensively studied to its full potential. In this paper, we conduct the first comparative study of various learning models on Hate and Abusive Speech on Twitter, and discuss the possibility of using additional features and context data for improvements. Experimental results show that a bidirectional GRU network trained on word-level features, with Latent Topic Clustering modules, is the most accurate model, scoring 0.805 F1.
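For reference, the F1 score reported above is the harmonic mean of precision and recall on the positive (abusive) class. A minimal pure-Python computation, using hypothetical labels rather than any data from the paper, might look like:

```python
def f1_score(y_true, y_pred, positive=1):
    """Compute the F1 score for the given positive class label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# hypothetical example: 1 = abusive, 0 = not abusive
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
print(f1_score(y_true, y_pred))  # → 0.75
```

Note that the paper's 0.805 figure is reported on the actual Hate and Abusive Speech on Twitter dataset; the labels above are illustrative only.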

Citation (APA)

Lee, Y., Yoon, S., & Jung, K. (2018). Comparative Studies of Detecting Abusive Language on Twitter. In 2nd Workshop on Abusive Language Online - Proceedings of the Workshop, co-located with EMNLP 2018 (pp. 101–106). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w18-5113
