Graph Structural Attack by Perturbing Spectral Distance

26 Citations · 15 Readers (Mendeley)

Abstract

Graph Convolutional Networks (GCNs) have fueled a surge of research interest due to their encouraging performance on graph learning tasks, but they have also been shown to be vulnerable to adversarial attacks. In this paper, an effective graph structural attack is investigated that disrupts graph spectral filters in the Fourier domain, which are the theoretical foundation of GCNs. We define the notion of spectral distance based on the eigenvalues of the graph Laplacian to measure the disruption of spectral filters. We realize the attack by maximizing the spectral distance and propose an efficient approximation to reduce the time complexity incurred by eigen-decomposition. The experiments demonstrate the remarkable effectiveness of the proposed attack in both black-box and white-box settings, for both test-time evasion attacks and training-time poisoning attacks. Our qualitative analysis reveals the connection between the imposed spectral changes in the Fourier domain and the attack behavior in the spatial domain. This provides empirical evidence that maximizing spectral distance is an effective way to change graph structural properties and thus disturb the frequency components used by graph filters, thereby affecting the learning of GCNs.
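The abstract's core quantity can be sketched in code. A minimal illustration, assuming the spectral distance is taken as a norm of the difference between the eigenvalue vectors of the (symmetric normalized) Laplacians of the clean and perturbed graphs; the paper's exact choice of Laplacian and norm may differ:

```python
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def spectral_distance(A, A_pert):
    """Euclidean distance between the sorted Laplacian eigenvalue vectors
    of the clean graph A and the perturbed graph A_pert (an assumed,
    illustrative definition)."""
    lam = np.linalg.eigvalsh(normalized_laplacian(A))       # sorted ascending
    lam_p = np.linalg.eigvalsh(normalized_laplacian(A_pert))
    return float(np.linalg.norm(lam - lam_p))

# Example: a triangle graph vs. the same graph with one edge removed.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
A_pert = A.copy()
A_pert[0, 1] = A_pert[1, 0] = 0.0  # flip (remove) one edge

print(spectral_distance(A, A_pert))  # > 0: the spectrum has shifted
```

An attacker would search over a budget of edge flips for the perturbation maximizing this quantity; the paper's approximation avoids recomputing the full eigen-decomposition at every step.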

Citation (APA)

Lin, L., Blaser, E., & Wang, H. (2022). Graph structural attack by perturbing spectral distance. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (pp. 989–998). Association for Computing Machinery. https://doi.org/10.1145/3534678.3539435
