Modularity optimization as a training criterion for graph neural networks


Abstract

Graph convolution is a recent scalable method for deep feature learning on attributed graphs, in which local node information is aggregated over multiple layers. However, such layers consider only the attribute information of a node's neighbors in the forward model and do not incorporate knowledge of global network structure into the learning task. The modularity function, in particular, provides a convenient source of information about the community structure of a network. In this work, we investigate how incorporating community-structure-preserving objectives into the graph convolutional model affects the quality of the learned representations. We incorporate these objectives in two ways: as an explicit regularization term in the cost function at the output layer, and as an additional loss term computed via an auxiliary layer. Experimental evaluation on two attributed bibliographic networks shows that the community-preserving objective improves semi-supervised node classification accuracy in the sparse-label regime.
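The modularity-based regularization described above can be sketched as follows. This is a minimal illustrative NumPy sketch, not the paper's implementation: it assumes the standard modularity matrix B = A - dd^T/(2m) and a soft community-assignment matrix H (e.g., softmax outputs of a GCN layer), and penalizes the negative soft modularity -tr(H^T B H)/(2m) so that minimizing the combined loss encourages community-preserving representations. The weighting coefficient `lam` is a hypothetical hyperparameter.

```python
import numpy as np

def modularity_matrix(A):
    """Modularity matrix B = A - d d^T / (2m) of an undirected adjacency matrix A."""
    d = A.sum(axis=1)          # node degrees
    two_m = d.sum()            # 2m = twice the number of edges
    return A - np.outer(d, d) / two_m

def modularity_loss(H, B, two_m):
    """Negative soft modularity -tr(H^T B H) / (2m).

    Rows of H are (soft) community assignments; minimizing this term
    rewards placing strongly connected nodes in the same community.
    """
    return -np.trace(H.T @ B @ H) / two_m

def combined_loss(ce_loss, H, B, two_m, lam=0.5):
    """Hypothetical combined objective: supervised cross-entropy plus
    lam times the community-structure-preserving (modularity) term."""
    return ce_loss + lam * modularity_loss(H, B, two_m)

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by the edge (2,3).
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
B = modularity_matrix(A)
two_m = A.sum()

# Hard assignment matching the two triangles scores better (more negative
# loss) than lumping every node into a single community (loss exactly 0).
H_good = np.zeros((6, 2)); H_good[:3, 0] = 1.0; H_good[3:, 1] = 1.0
H_one = np.ones((6, 1))
print(modularity_loss(H_good, B, two_m))  # ≈ -0.357
print(modularity_loss(H_one, B, two_m))   # 0.0
```

In a differentiable framework the same term would be computed on the GCN's hidden or output activations and backpropagated alongside the cross-entropy loss, which corresponds to the paper's regularization-term variant.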

Citation (APA)

Murata, T., & Afzal, N. (2018). Modularity optimization as a training criterion for graph neural networks. In Springer Proceedings in Complexity (Vol. 0, pp. 123–135). Springer Science and Business Media B.V. https://doi.org/10.1007/978-3-319-73198-8_11
