Argot: Generating adversarial readable Chinese Texts

Abstract

Natural language processing (NLP) models are known to be vulnerable to adversarial examples, much like image processing models. Studying adversarial texts is an essential step toward improving the robustness of NLP models. However, existing studies focus mainly on analyzing English texts and generating adversarial examples for them; no prior work examines whether, and how effectively, these techniques transfer to another language, e.g., Chinese. In this paper, we analyze the differences between Chinese and English and explore how to adapt existing English adversarial-generation methods to Chinese. We propose Argot, a novel black-box solution for generating adversarial Chinese texts, which combines methods for generating adversarial English samples with several novel techniques based on characteristics of the Chinese language. Argot effectively and efficiently generates adversarial Chinese texts with good readability. Furthermore, Argot can also automatically generate targeted adversarial Chinese texts, achieving a high success rate while preserving readability.
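To make the black-box setting concrete, the following is a minimal illustrative sketch, not the paper's actual Argot algorithm: a greedy loop that queries a classifier and substitutes characters with similar-looking or similar-sounding candidates until the prediction flips. The substitution table and the toy trigger-word classifier here are both hypothetical stand-ins.

```python
# Minimal black-box character-substitution sketch (NOT Argot itself).
# A greedy attack queries a toy classifier and swaps characters for
# homophone/glyph-similar candidates until the score drops to zero.

# Hypothetical substitution table: character -> similar candidates
# (e.g. homophones sharing pinyin, or visually similar glyphs).
SUBS = {"差": ["叉"], "坏": ["怀"]}

def toy_score(text: str) -> int:
    """Toy black-box 'negative sentiment' classifier: counts trigger chars."""
    triggers = {"差", "坏"}
    return sum(1 for ch in text if ch in triggers)

def perturb(text: str) -> str:
    """Greedily substitute characters while each swap lowers the score."""
    chars = list(text)
    for i, ch in enumerate(chars):
        if toy_score("".join(chars)) == 0:
            break  # attack already succeeded
        for cand in SUBS.get(ch, []):
            trial = chars[:]
            trial[i] = cand
            if toy_score("".join(trial)) < toy_score("".join(chars)):
                chars = trial  # keep the substitution that lowers the score
                break
    return "".join(chars)
```

For example, `perturb("服务太差")` replaces 差 (chà, "bad") with the homophone 叉 (chā), so the toy classifier no longer fires while the sentence remains readable to a human. A real attack would rank substitution positions by their influence on the model's output rather than scanning left to right.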

CITATION STYLE

APA

Zhang, Z., Liu, M., Zhang, C., Zhang, Y., Li, Z., Li, Q., … Sun, D. (2020). Argot: Generating adversarial readable Chinese Texts. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2021-January, pp. 2533–2539). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/351
