Speedup Robust Graph Structure Learning with Low-Rank Information

Abstract

Recent studies have shown that graph neural networks (GNNs) are vulnerable to unnoticeable adversarial perturbations, which largely confines their deployment in many safety-critical domains. Robust graph structure learning has been proposed to improve GNN performance in the face of adversarial attacks; in particular, low-rank methods are used to purify perturbed graphs. However, these methods are mostly computationally expensive, with O(n³) time complexity and O(n²) space complexity. We propose LRGNN, a fast and robust graph structure learning framework that exploits the low-rank property as prior knowledge to speed up optimization. To eliminate adversarial perturbations, LRGNN decouples the adjacency matrix into a low-rank component and a sparse one, and learns by minimizing the rank of the first part while suppressing the second. A sparse variant further reduces the memory footprint. Experimental results under various attack settings show that LRGNN achieves robustness comparable to the state of the art far more efficiently, with a significant advantage on large-scale graphs.
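The core idea in the abstract — splitting a perturbed adjacency matrix into a low-rank part (the clean structure) and a sparse part (candidate adversarial edges) — can be illustrated with a generic alternating-projection sketch. This is not the authors' LRGNN algorithm; the function name, the hard-rank projection, and the soft-threshold step are illustrative assumptions in the spirit of robust-PCA-style decompositions:

```python
import numpy as np

def decompose_low_rank_sparse(A, rank=2, sparse_thresh=0.1, n_iters=50):
    """Split a matrix A into a low-rank part L and a sparse part S
    by alternating projections. A generic sketch, not LRGNN itself."""
    L = np.zeros_like(A, dtype=float)
    S = np.zeros_like(A, dtype=float)
    for _ in range(n_iters):
        # Low-rank step: keep only the top-`rank` singular components of A - S.
        U, sigma, Vt = np.linalg.svd(A - S, full_matrices=False)
        L = (U[:, :rank] * sigma[:rank]) @ Vt[:rank, :]
        # Sparse step: soft-threshold the residual, so only large
        # deviations (candidate perturbed edges) survive in S.
        R = A - L
        S = np.sign(R) * np.maximum(np.abs(R) - sparse_thresh, 0.0)
    return L, S

# Toy usage: a rank-1 "clean" graph with one injected perturbation.
A = np.ones((4, 4))
A[0, 3] += 1.0  # adversarial edge weight
L, S = decompose_low_rank_sparse(A, rank=1, sparse_thresh=0.3)
```

Note that `np.linalg.svd` computes a full SVD, which is exactly the O(n³) cost the abstract criticizes; a practical implementation would substitute a truncated or randomized SVD (e.g. `scipy.sparse.linalg.svds`) and exploit sparsity, which is presumably where LRGNN's speedup comes from.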

Citation (APA)

Xu, H., Xiang, L., Yu, J., Cao, A., & Wang, X. (2021). Speedup Robust Graph Structure Learning with Low-Rank Information. In International Conference on Information and Knowledge Management, Proceedings (pp. 2241–2250). Association for Computing Machinery. https://doi.org/10.1145/3459637.3482299
