Deep neural networks, while generalizing well, are known to be sensitive to small adversarial perturbations. This phenomenon poses a severe security threat and calls for in-depth investigation of the robustness of deep learning models. With the emergence of neural networks for graph-structured data, similar investigations are needed to understand their robustness. It has been found that adversarially perturbing the graph structure and/or node features may significantly degrade model performance. In this work, we show from a different angle that such fragility similarly arises if the graph contains a few bad-actor nodes, which compromise a trained graph neural network by flipping their connections to any targeted victim. Worse, the bad actors found for one graph model severely compromise other models as well. We call the bad actors “anchor nodes” and propose an algorithm, named GUA, to identify them. Thorough empirical investigation suggests an interesting finding that the anchor nodes often belong to the same class; it also corroborates the intuitive tradeoff between the number of anchor nodes and the attack success rate. For the Cora dataset, which contains 2,708 nodes, as few as six anchor nodes yield an attack success rate higher than 80% for GCN and three other models.
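To make the attack mechanism concrete, the sketch below illustrates how a fixed set of anchor nodes can compromise a victim node's prediction by flipping the corresponding adjacency entries and re-running a GCN forward pass. This is a minimal illustration only, not the GUA algorithm from the paper: the toy graph, the random GCN weights, and the anchor indices are placeholder assumptions, whereas GUA learns the anchor set from a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_adj(A):
    """Symmetrically normalize A + I, as in a standard GCN layer."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_forward(A, X, W1, W2):
    """Two-layer GCN forward pass: softmax(A_norm . relu(A_norm . X . W1) . W2)."""
    A_norm = normalize_adj(A)
    H = np.maximum(A_norm @ X @ W1, 0.0)          # hidden layer with ReLU
    logits = A_norm @ H @ W2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def flip_anchor_edges(A, anchors, victim):
    """Flip the connections between the victim node and each anchor node."""
    A_pert = A.copy()
    for a in anchors:
        A_pert[victim, a] = 1 - A_pert[victim, a]
        A_pert[a, victim] = 1 - A_pert[a, victim]
    return A_pert

# Toy graph: 10 nodes, random symmetric adjacency, 5 features, 3 classes.
n, f, c, hidden = 10, 5, 3, 8
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1); A = A + A.T                    # symmetric, no self-loops
X = rng.normal(size=(n, f))
W1 = rng.normal(size=(f, hidden))                 # stand-in for trained GCN weights
W2 = rng.normal(size=(hidden, c))

victim = 0
anchors = [3, 7]                                  # placeholder anchor set, not found by GUA

clean_pred = gcn_forward(A, X, W1, W2)[victim].argmax()
pert_pred = gcn_forward(flip_anchor_edges(A, anchors, victim), X, W1, W2)[victim].argmax()
print(f"victim prediction: clean={clean_pred}, after flipping anchor edges={pert_pred}")
```

The same perturbed adjacency matrix can be fed to any other trained graph model, which is what makes a single anchor set a universal, cross-model attack in the setting the abstract describes.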
CITATION STYLE
Zang, X., Xie, Y., Chen, J., & Yuan, B. (2021). Graph Universal Adversarial Attacks: A Few Bad Actors Ruin Graph Learning Models. In IJCAI International Joint Conference on Artificial Intelligence (pp. 3328–3334). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/458