Aggressive Language Detection with Joint Text Normalization via Adversarial Multi-task Learning

Abstract

Aggressive language detection (ALD), the task of identifying abusive and offensive language in text, is a crucial application in the NLP community. Most existing works treat ALD as a regular classification task with neural models, ignoring an inherent difficulty of social media text: it is largely unnormalized and irregular. In this work, we aim to improve ALD by jointly performing text normalization (TN) via an adversarial multi-task learning framework. Private encoders for ALD and TN each focus on retrieving task-specific features, while a shared encoder learns the underlying features common to both tasks. During adversarial training, a task discriminator tries to distinguish which task the shared features come from, pushing the shared encoder toward task-agnostic representations. Experimental results on four ALD datasets show that our model outperforms all baselines under differing settings by large margins, demonstrating the necessity of jointly learning TN with ALD. Further analysis is conducted for a better understanding of our method.
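For readers unfamiliar with the shared-private adversarial setup the abstract describes, the following PyTorch-style sketch shows one common way to wire it up: private encoders per task, a shared encoder, and a task discriminator trained adversarially via gradient reversal. This is an illustrative assumption, not the authors' implementation; all module names, dimensions, and the gradient-reversal trick are placeholders.

```python
# Minimal sketch (NOT the authors' code) of shared-private adversarial
# multi-task learning for ALD + TN. Sizes and names are illustrative.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses gradients in the backward
    pass so the shared encoder learns to fool the task discriminator."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class SharedPrivateModel(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Private encoders retrieve task-specific features.
        self.ald_enc = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.tn_enc = nn.LSTM(emb_dim, hidden, batch_first=True)
        # Shared encoder learns features common to both tasks.
        self.shared_enc = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.ald_head = nn.Linear(2 * hidden, 2)          # aggressive vs. not
        self.tn_head = nn.Linear(2 * hidden, vocab_size)  # normalized tokens
        self.discriminator = nn.Linear(hidden, 2)         # which task?

    def forward(self, tokens, task, lambd=1.0):
        x = self.embed(tokens)
        shared, _ = self.shared_enc(x)
        # Discriminator sees gradient-reversed shared features, so its
        # training signal makes the shared features task-agnostic.
        task_logits = self.discriminator(
            GradReverse.apply(shared.mean(1), lambd))
        if task == "ald":  # sentence-level classification
            private, _ = self.ald_enc(x)
            feats = torch.cat([private.mean(1), shared.mean(1)], dim=-1)
            return self.ald_head(feats), task_logits
        else:              # "tn": token-level normalization
            private, _ = self.tn_enc(x)
            feats = torch.cat([private, shared], dim=-1)
            return self.tn_head(feats), task_logits
```

In training, batches from the two tasks would be interleaved, with each task's loss plus the discriminator's cross-entropy backpropagated jointly; the reversed gradient is what makes the shared/discriminator interaction adversarial.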

Cite

APA

Wu, S., Fei, H., & Ji, D. (2020). Aggressive Language Detection with Joint Text Normalization via Adversarial Multi-task Learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12430 LNAI, pp. 683–696). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-60450-9_54
