Leveraging Dependency Grammar for Fine-Grained Offensive Language Detection using Graph Convolutional Networks


Abstract

The last few years have witnessed an exponential rise in the propagation of offensive text on social media. Identifying this text with high precision is crucial for the well-being of society. Most existing approaches tend to assign high toxicity scores to innocuous statements (e.g., “I am a gay man”). These false positives result from over-generalization on the training data, where specific terms in a statement may have been used in a pejorative sense (e.g., “gay”). Emphasis on such words alone can lead to discrimination against the very classes these systems are designed to protect. In this paper, we address the problem of offensive language detection on Twitter, while also detecting the type and the target of the offense. We propose a novel approach called SyLSTM, which integrates syntactic features in the form of the dependency parse tree of a sentence and semantic features in the form of word embeddings into a deep learning architecture using a Graph Convolutional Network. Results show that the proposed approach significantly outperforms the state-of-the-art BERT model with orders of magnitude fewer parameters.
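To make the core idea concrete, the following is a minimal sketch of running a graph convolution over a sentence's dependency parse, with word embeddings as node features. The toy parse, embedding dimension, layer size, and normalization scheme (the standard symmetrically normalized GCN layer) are illustrative assumptions, not the authors' actual SyLSTM configuration.

```python
import numpy as np

# Toy sentence with a hand-written dependency parse
# (head -> dependent edges over 0-indexed tokens).
tokens = ["I", "am", "a", "gay", "man"]
edges = [(1, 0),  # am -> I   (nsubj)
         (1, 4),  # am -> man (attr)
         (4, 2),  # man -> a  (det)
         (4, 3)]  # man -> gay (amod)

n = len(tokens)

# Treat the parse tree as an undirected graph; add self-loops.
A = np.zeros((n, n))
for head, dep in edges:
    A[head, dep] = A[dep, head] = 1.0
A_hat = A + np.eye(n)

# Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}.
deg = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

# Random stand-ins for pretrained word embeddings (semantic features).
rng = np.random.default_rng(0)
X = rng.normal(size=(n, 8))   # n tokens x embedding dim (assumed: 8)
W = rng.normal(size=(8, 4))   # GCN weight matrix (assumed output dim: 4)

# One GCN layer: each token's representation is a learned mix of its
# own embedding and those of its syntactic neighbors.
H = np.maximum(0.0, A_norm @ X @ W)  # ReLU activation
print(H.shape)  # (5, 4)
```

In a full model, the resulting syntax-aware token representations would feed downstream classification layers; the point of the sketch is only that the dependency structure, not linear word order, determines which tokens influence each other.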

Citation (APA)

Goel, D., & Sharma, R. (2022). Leveraging Dependency Grammar for Fine-Grained Offensive Language Detection using Graph Convolutional Networks. In SocialNLP 2022 - 10th International Workshop on Natural Language Processing for Social Media, Proceedings of the Workshop (pp. 34–43). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.socialnlp-1.4
