Detecting malicious social robots with generative adversarial networks


Abstract

Malicious social robots, which disseminate malicious information on social networks, seriously affect information security and the network environment. Detecting them is an active research topic and a significant concern for researchers. Classification-based methods have been widely used for social robot detection. However, such methods are limited by unbalanced data sets in which legitimate accounts (negative samples) greatly outnumber malicious robots (positive samples), leading to unsatisfactory detection results. This paper proposes using generative adversarial networks (GANs) to extend the unbalanced data sets before classifier training in order to improve the detection of social robots. Five popular oversampling algorithms were compared in the experiments, and the effects of the imbalance degree and the expansion ratio of the original data on oversampling were studied. The experimental results show that the proposed method achieves better detection performance than the other algorithms in terms of the F1 measure. The GAN method also performed well when the imbalance degree was smaller than 15%.
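The abstract describes oversampling the minority class (malicious robots) with a GAN before training a classifier. The following is a minimal sketch in PyTorch of what such GAN-based oversampling might look like for tabular account features; the feature dimension, network sizes, training schedule, and the oversample_with_gan helper are illustrative assumptions and are not taken from the paper.

# Minimal sketch of GAN-based oversampling for an imbalanced bot-detection
# data set. All dimensions and hyperparameters below are assumptions for
# illustration, not values reported in the paper.
import torch
import torch.nn as nn

FEATURE_DIM = 16   # assumed number of account features per sample
LATENT_DIM = 8     # assumed noise dimension fed to the generator

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 32), nn.ReLU(),
            nn.Linear(32, FEATURE_DIM),
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEATURE_DIM, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

def oversample_with_gan(minority_x, n_new, epochs=200):
    """Train a GAN on minority-class (malicious robot) samples and return
    n_new synthetic positive samples to append to the training set."""
    gen, disc = Generator(), Discriminator()
    opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
    bce = nn.BCELoss()
    real = torch.as_tensor(minority_x, dtype=torch.float32)
    ones = torch.ones(len(real), 1)
    zeros = torch.zeros(len(real), 1)
    for _ in range(epochs):
        # Discriminator step: real minority samples vs. generated fakes.
        fake = gen(torch.randn(len(real), LATENT_DIM)).detach()
        loss_d = bce(disc(real), ones) + bce(disc(fake), zeros)
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # Generator step: try to fool the discriminator.
        fake = gen(torch.randn(len(real), LATENT_DIM))
        loss_g = bce(disc(fake), ones)
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    with torch.no_grad():
        return gen(torch.randn(n_new, LATENT_DIM)).numpy()

Under this sketch, the returned synthetic samples would be appended to the positive class of the training set before fitting an ordinary classifier and evaluating it with the F1 measure, as the abstract outlines.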

Citation (APA)
Wu, B., Liu, L., Dai, Z., Wang, X., & Zheng, K. (2019). Detecting malicious social robots with generative adversarial networks. KSII Transactions on Internet and Information Systems, 13(11), 5594–5615. https://doi.org/10.3837/tiis.2019.11.018
