Network embedding algorithms to date are primarily designed for static networks, where all nodes are known before learning. How to infer embeddings for out-of-sample nodes, i.e. nodes that arrive after learning, remains an open problem. The problem poses great challenges to existing methods, since the inferred embeddings should preserve intricate network properties such as high-order proximity, share similar characteristics (i.e. be of a homogeneous space) with in-sample node embeddings, and be of low computational cost. To overcome these challenges, we propose a Deeply Transformed High-order Laplacian Gaussian Process (DepthLGP) method to infer embeddings for out-of-sample nodes. DepthLGP combines the strength of nonparametric probabilistic modeling and deep learning. In particular, we design a high-order Laplacian Gaussian process (hLGP) to encode network properties, which permits fast and scalable inference. In order to further ensure homogeneity, we then employ a deep neural network to learn a nonlinear transformation from latent states of the hLGP to node embeddings. DepthLGP is general, in that it is applicable to embeddings learned by any network embedding algorithms. We theoretically prove the expressive power of DepthLGP, and conduct extensive experiments on real-world networks. Empirical results demonstrate that our approach can achieve significant performance gain over existing approaches.
Ma, J., Cui, P., & Zhu, W. (2018). DepthLGP : Learning Embeddings of Out-of-Sample Nodes in Dynamic Networks. AAAI 2018.