Abstract
Recent NLP literature pays little attention to the robustness of toxic language predictors, even though these systems are most likely to be used in adversarial contexts. This paper presents a novel adversarial attack, ToxicTrap, which introduces small word-level perturbations to fool SOTA text classifiers into predicting toxic text samples as benign. ToxicTrap exploits greedy-based search strategies to enable fast and effective generation of toxic adversarial examples. Two novel goal function designs allow ToxicTrap to identify weaknesses in both multiclass and multilabel toxic language detectors. Our empirical results show that SOTA toxicity text classifiers are indeed vulnerable to the proposed attacks, attaining over 98% attack success rates in multilabel cases. We also show how vanilla adversarial training and its improved version can help increase the robustness of a toxicity detector even against unseen attacks.
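To make the attack described above concrete, below is a minimal sketch of a greedy word-level evasion attack in the spirit of ToxicTrap: rank words by their influence on the toxic-class score, then greedily substitute the most influential words until the classifier's prediction flips to benign. The `toy_toxicity_score` classifier and the `SUBSTITUTES` table are toy placeholders invented for illustration, not the paper's models or transformations.

```python
# Illustrative sketch of a greedy word-level evasion attack, in the spirit
# of ToxicTrap. The classifier and substitution table are toy placeholders.
from typing import Callable, Dict, List


def toy_toxicity_score(text: str) -> float:
    """Placeholder classifier: probability that `text` is toxic."""
    toxic_words = {"idiot", "stupid", "trash"}
    tokens = text.lower().split()
    hits = sum(t in toxic_words for t in tokens)
    return min(1.0, hits / max(len(tokens), 1) * 2)


# Hypothetical word-level perturbation candidates (the paper uses
# richer transformations, e.g. synonym and character-level swaps).
SUBSTITUTES: Dict[str, List[str]] = {
    "idiot": ["id1ot", "idi0t"],
    "stupid": ["stup1d", "st upid"],
    "trash": ["tr@sh"],
}


def greedy_attack(text: str,
                  score: Callable[[str], float],
                  threshold: float = 0.5) -> str:
    """Greedily perturb the most influential words until the toxic score
    drops below `threshold` (goal: toxic -> benign)."""
    tokens = text.split()

    # Word importance: how much the toxic score drops when a word is removed.
    def importance(i: int) -> float:
        reduced = " ".join(tokens[:i] + tokens[i + 1:])
        return score(text) - score(reduced)

    order = sorted(range(len(tokens)), key=importance, reverse=True)
    for i in order:
        best, best_score = tokens[i], score(" ".join(tokens))
        for cand in SUBSTITUTES.get(tokens[i].lower(), []):
            trial = tokens[:i] + [cand] + tokens[i + 1:]
            s = score(" ".join(trial))
            if s < best_score:
                best, best_score = cand, s
        tokens[i] = best
        if best_score < threshold:
            break  # goal reached: classifier now predicts benign
    return " ".join(tokens)


if __name__ == "__main__":
    original = "you are a stupid idiot"
    adversarial = greedy_attack(original, toy_toxicity_score)
    print(original, "->", adversarial)
```

In the multilabel setting, a goal function along these lines would instead check that every toxicity-related label falls below its decision threshold before declaring success; the sketch above only handles the binary/multiclass goal of flipping the top prediction to benign.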
Citation
Bespalov, D., Bhabesh, S., Xiang, Y., Zhou, L., & Qi, Y. (2023). Towards Building a Robust Toxicity Predictor. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 5, pp. 581–598). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-industry.56