Learning graph structure for graph neural networks (GNNs) is crucial for GNN-based downstream learning tasks, yet it is challenging due to the non-differentiable, discrete nature of graph structure and the lack of ground truth. In this paper, we address these problems and propose a novel graph structure learning framework for GNNs. First, we directly model a continuous graph structure with dual-normalization, which implicitly imposes a sparsity constraint and reduces the influence of noisy edges. Second, we formulate the whole learning process as a bilevel programming problem, in which the inner objective optimizes the GNN given the learned graph, while the outer objective optimizes the graph structure to minimize the generalization error on the downstream task. For the bilevel optimization, we propose an improved Neumann-IFT algorithm that yields an approximate solution more stably and accurately than existing optimization methods, while keeping the optimization process memory-efficient and scalable to large graphs. Experiments on node classification and scene graph generation show that our method outperforms related methods, especially on noisy graphs.
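The core idea behind Neumann-IFT hypergradients in general (not the paper's specific improved algorithm) can be sketched on a toy problem. By the implicit function theorem, the outer gradient requires an inverse-Hessian-vector product, which a truncated Neumann series approximates using only Hessian-vector products. The quadratic inner/outer objectives, names, and step size below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def neumann_inverse_hvp(hvp, v, alpha, K):
    """Approximate H^{-1} v by the truncated Neumann series
    alpha * sum_{k=0}^{K} (I - alpha*H)^k v, valid when the
    spectral radius of (I - alpha*H) is below 1. Only
    Hessian-vector products are needed, never the full Hessian,
    which is what makes the scheme memory-efficient."""
    p = v.copy()    # current term (I - alpha*H)^k v
    acc = v.copy()  # running sum, starts with the k = 0 term
    for _ in range(K):
        p = p - alpha * hvp(p)
        acc += p
    return alpha * acc

# Toy bilevel problem (illustration only, not the paper's GNN setup):
#   inner: f(w, lam) = 0.5 * w^T A w - lam^T w  =>  w*(lam) = A^{-1} lam
#   outer: g(w) = 0.5 * ||w - t||^2
# IFT hypergradient: d g(w*(lam)) / d lam = A^{-1} (w* - t)
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)  # symmetric positive-definite inner Hessian
lam = rng.standard_normal(5)
t = rng.standard_normal(5)

w_star = np.linalg.solve(A, lam)
grad_outer = w_star - t
hvp = lambda v: A @ v  # Hessian-vector product for the inner objective

approx = neumann_inverse_hvp(hvp, grad_outer,
                             alpha=1.0 / np.linalg.norm(A, 2), K=500)
exact = np.linalg.solve(A, grad_outer)
print(np.allclose(approx, exact, atol=1e-4))
```

Choosing alpha below 1 / ||H|| keeps every eigenvalue of (I - alpha*H) in (0, 1), so the series converges geometrically; in practice K is truncated to a small number of steps, trading accuracy for compute.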
Hu, M., Chang, H., Ma, B., & Shan, S. (2022). Learning Continuous Graph Structure with Bilevel Programming for Graph Neural Networks. In IJCAI International Joint Conference on Artificial Intelligence (pp. 3057–3063). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2022/424