Fighting adversarial attacks on online abusive language moderation

Abstract

Lack of moderation in online conversations may result in personal aggression, harassment or cyberbullying. Such hostility is usually expressed through profanity or abusive language. On the basis of this assumption, Google has recently developed a machine-learning model to detect hostility within a comment. The model assesses to what extent abusive language is poisoning a conversation, producing a “toxicity” score for the comment. Unfortunately, it has been suggested that such a toxicity model can be deceived by adversarial attacks that manipulate the text sequence of the abusive language. In this paper we aim to fight this kind of attack: first, we characterise two types of adversarial attacks, one using obfuscation and the other using polarity transformations. Then, we propose a two-stage approach to disarm such attacks by coupling a text deobfuscation method with the toxicity scoring model. The approach was validated on a dataset of approximately 24,000 distorted comments, showing that it is feasible to restore the toxicity score of the adversarial variants. We anticipate that combining machine learning and text pattern recognition methods operating on different layers of linguistic features will help foster aggression-safe online conversations despite the adversarial challenges inherent to the versatile nature of written language.
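
The sketch below illustrates the general idea of the two-stage pipeline described in the abstract: a character-level deobfuscation step followed by toxicity scoring of the restored text. It is only a minimal illustration under assumed rules; the substitution map, the `deobfuscate`/`toxicity_score` functions and the tiny lexicon are hypothetical stand-ins, not the authors' actual method or Google's scoring model.

```python
import re

# Hypothetical map of character substitutions often used to obfuscate abusive
# words (e.g. "1" for "i", "@" for "a"); illustrative only.
LEET_MAP = str.maketrans({"1": "i", "!": "i", "3": "e", "4": "a", "@": "a",
                          "0": "o", "$": "s", "5": "s", "7": "t"})

def deobfuscate(comment: str) -> str:
    """Stage 1 (sketch): undo simple character-level obfuscation."""
    text = comment.lower().translate(LEET_MAP)
    # Drop separators inserted inside words ("s.t.u.p.i.d" -> "stupid").
    text = re.sub(r"(?<=\w)[.\-_*](?=\w)", "", text)
    # Collapse runs of 3+ repeated letters ("stuuupid" -> "stupid").
    text = re.sub(r"(\w)\1{2,}", r"\1", text)
    return text

def toxicity_score(comment: str) -> float:
    """Stage 2 (placeholder): stand-in for a toxicity model; here it just
    counts hits against a tiny illustrative lexicon."""
    lexicon = {"stupid", "idiot", "moron"}
    words = re.findall(r"[a-z]+", comment)
    hits = sum(w in lexicon for w in words)
    return min(1.0, 5 * hits / max(1, len(words)))

def moderate(comment: str) -> float:
    """Couple both stages: score the deobfuscated text, not the raw input."""
    return toxicity_score(deobfuscate(comment))

if __name__ == "__main__":
    adversarial = "You are so s.t.u.u.u.p.1.d"
    print(f"raw score:      {toxicity_score(adversarial.lower()):.2f}")
    print(f"restored score: {moderate(adversarial):.2f}")
```

On this toy example the raw comment evades the scorer (score 0.0), while the deobfuscated variant recovers a high toxicity score, which mirrors the "restore the toxicity score of the adversarial variants" goal stated in the abstract.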

Cite

CITATION STYLE

APA

Rodriguez, N., & Rojas-Galeano, S. (2018). Fighting adversarial attacks on online abusive language moderation. In Communications in Computer and Information Science (Vol. 915, pp. 480–493). Springer Verlag. https://doi.org/10.1007/978-3-030-00350-0_40
