Tensor decomposition-based node embedding

Abstract

In recent years, node embedding algorithms, which learn low-dimensional vector representations for nodes in a graph, have been one of the key research interests of the graph mining community. Existing algorithms either rely on computationally expensive eigendecomposition of large matrices, or require tuning of word embedding-based hyperparameters because they represent the graph as node sequences analogous to sentences in a document. Moreover, the latent features produced by these algorithms are hard to interpret. In this paper, we present Tensor Decomposition-based Node Embedding (TDNE), a novel model for learning node representations for arbitrary types of graphs: undirected, directed, and/or weighted. Our model preserves the local and global structural properties of a graph by constructing a third-order tensor from the k-step transition probability matrices and decomposing the tensor through CANDECOMP/PARAFAC (CP) decomposition in order to produce an interpretable, low-dimensional vector space for the nodes. Our experimental evaluation using two well-known social network datasets shows TDNE to be interpretable with respect to the understandability of the feature space, and precise with respect to network reconstruction.
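The pipeline the abstract describes can be sketched as follows. This is a minimal, hypothetical illustration (not the authors' implementation): it builds the 1..k-step transition probability matrices of a graph, stacks them into a third-order tensor, and factors that tensor with a plain ALS-based CP decomposition. All function names, the rank, and the number of steps are illustrative assumptions.

```python
# Illustrative sketch of a TDNE-style pipeline (not the paper's code):
# k-step transition matrices -> third-order tensor -> CP decomposition via ALS.
import numpy as np

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product of two factor matrices."""
    r = U.shape[1]
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, r)

def unfold(T, mode):
    """Mode-n matricization of a 3-way tensor (C-order columns)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def cp_als(T, rank, n_iter=100, seed=0):
    """Rank-`rank` CP decomposition of a 3-way tensor via alternating least squares."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(n_iter):
        A = unfold(T, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(T, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(T, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

def tdne_embed(adj, k_steps=3, rank=4):
    """Embed nodes from the tensor of 1..k-step transition probability matrices."""
    P = adj / adj.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
    T = np.stack([np.linalg.matrix_power(P, k) for k in range(1, k_steps + 1)])
    A, B, C = cp_als(T, rank)
    return B  # one node-mode factor matrix serves as the embedding

# Toy undirected graph: a 6-node ring.
n = 6
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
emb = tdne_embed(adj)
print(emb.shape)  # (6, 4): one rank-4 embedding per node
```

Because CP factors are unique under mild conditions (up to scaling and permutation), each embedding dimension corresponds to a single rank-one component, which is the source of the interpretability claim.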

Citation (APA)

Hamdi, S. M., Boubrahimi, S. F., & Angryk, R. (2019). Tensor decomposition-based node embedding. In International Conference on Information and Knowledge Management, Proceedings (pp. 2105–2108). Association for Computing Machinery. https://doi.org/10.1145/3357384.3358127
