Modeling dynamics in graph neural networks (GNNs) contributes to the understanding of evolution in dynamic graphs, which helps optimize spatio-temporal representations for real-world dynamic network problems. Empirically, dynamic GNN embedding requires additional temporal encoders, which inevitably introduce extra learnable parameters and make dynamic GNNs oversized and inefficient. Furthermore, previous dynamic GNN models are trained under the same fixed temporal window, which can trap optimization in short-term optima. To address these issues, we propose the WinGNN framework for modeling dynamic graphs, realized by a simple GNN with a meta-learning strategy and a novel mechanism of random gradient aggregation. WinGNN calculates the frame-wise loss on the current snapshot and passes the loss gradient to the next snapshot, modeling graph dynamics without temporal encoders. It then introduces a randomized sliding window to obtain window-aware gradients over consecutive snapshots, and the two types of gradient are aggregated to update the GNN, thereby reducing the parameter size and improving robustness. Experiments on six public datasets show the advantage of WinGNN over existing baselines: it achieves the best result on twenty-two of twenty-four performance metrics.
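To make the two-gradient mechanism concrete, below is a minimal PyTorch sketch of one such update, under stated assumptions: a toy one-layer GNN on synthetic identity-adjacency snapshots, an MAML-style inner step as one reading of "passes the loss gradient to the next snapshot," and illustrative names (`window_step`, `inner_lr`, the mixing weight `alpha`) that are not from the authors' reference implementation.

```python
# Hedged sketch of a WinGNN-style update: frame-wise gradients adapt the
# parameters from snapshot to snapshot, a random window of consecutive
# snapshots yields a window-aware gradient, and the two are aggregated.
import random
import torch
import torch.nn.functional as F

def gnn_forward(weight, adj, feats):
    # One-layer graph convolution H = A X W; no temporal encoder is involved.
    return adj @ feats @ weight

def window_step(weight, snapshots, inner_lr=0.1, max_window=4, alpha=0.5):
    # Sample a random window of consecutive snapshots.
    start = random.randrange(len(snapshots))
    length = random.randint(1, min(max_window, len(snapshots) - start))
    window = snapshots[start:start + length]

    fast_w = weight                     # adapted parameters, kept on the graph
    frame_grads, window_loss = [], 0.0
    for adj, feats, labels in window:
        loss = F.cross_entropy(gnn_forward(fast_w, adj, feats), labels)
        window_loss = window_loss + loss
        # Frame-wise gradient of the current snapshot; applying it as an inner
        # (meta-learning) update carries the dynamics to the next snapshot.
        (g,) = torch.autograd.grad(loss, fast_w, create_graph=True)
        frame_grads.append(g.detach())
        fast_w = fast_w - inner_lr * g
    # Window-aware gradient of the accumulated loss w.r.t. the original weights.
    (win_g,) = torch.autograd.grad(window_loss / length, weight)
    # Aggregate the two gradient types (alpha is an illustrative mixing weight).
    return alpha * torch.stack(frame_grads).mean(0) + (1 - alpha) * win_g

# Usage on a synthetic dynamic graph: 8 snapshots, 10 nodes, 5 features.
torch.manual_seed(0); random.seed(0)
snapshots = [(torch.eye(10), torch.randn(10, 5), torch.randint(0, 3, (10,)))
             for _ in range(8)]
weight = torch.randn(5, 3, requires_grad=True)
for step in range(20):
    grad = window_step(weight, snapshots)
    with torch.no_grad():
        weight -= 0.05 * grad
```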
Zhu, Y., Cong, F., Zhang, D., Gong, W., Lin, Q., Feng, W., … Tang, J. (2023). WinGNN: Dynamic Graph Neural Networks with Random Gradient Aggregation Window. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 3650–3662). Association for Computing Machinery. https://doi.org/10.1145/3580305.3599551