Sequence-to-sequence modeling for graph representation learning

Abstract

We propose sequence-to-sequence architectures for graph representation learning in both supervised and unsupervised regimes. Our methods use recurrent neural networks to encode and decode information from graph-structured data. Because recurrent neural networks operate on sequences, we generate sequences of nodes by traversing graphs with several methods that use different types of substructures at various levels of granularity. Our unsupervised approaches use long short-term memory (LSTM) encoder-decoder models to embed the graph sequences into a continuous vector space; we then represent a graph by aggregating its graph sequence representations. Our supervised architecture uses an attention mechanism to collect information from the neighborhood of a sequence, enabling the model to focus on the subgraphs that are most important for a graph classification task. We demonstrate the effectiveness of our approaches by showing improvements over existing state-of-the-art methods on several graph classification tasks.
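
The following is a minimal sketch, not the authors' implementation, of the unsupervised pipeline described in the abstract: node sequences are generated by random walks over a toy graph, an LSTM encoder-decoder is trained to reconstruct each sequence, and the resulting sequence embeddings are averaged into a graph-level vector. The random-walk generator, model sizes, and mean-pooling aggregation are illustrative assumptions standing in for the traversal strategies and aggregation schemes studied in the paper.

import random
import torch
import torch.nn as nn

class SeqAutoencoder(nn.Module):
    """LSTM encoder-decoder that reconstructs node-id sequences."""
    def __init__(self, num_nodes, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_nodes, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_nodes)

    def forward(self, seq):                        # seq: (batch, length) node ids
        x = self.embed(seq)
        _, state = self.encoder(x)                 # encode the whole sequence
        dec_out, _ = self.decoder(x, state)        # teacher-forced reconstruction
        return self.out(dec_out), state[0][-1]     # per-step logits, sequence embedding

def random_walks(adj, walk_len=8, walks_per_node=5):
    """Generate node sequences with uniform random walks over an adjacency dict."""
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk, cur = [start], start
            for _ in range(walk_len - 1):
                cur = random.choice(adj[cur])
                walk.append(cur)
            walks.append(walk)
    return walks

# Toy graph given as an adjacency list over integer node ids.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
walks = torch.tensor(random_walks(adj))            # (num_walks, walk_len)

model = SeqAutoencoder(num_nodes=len(adj))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(50):                                # train to reconstruct the walks
    logits, _ = model(walks)
    loss = loss_fn(logits.reshape(-1, len(adj)), walks.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

with torch.no_grad():                              # aggregate sequence embeddings
    _, seq_embeddings = model(walks)
    graph_vector = seq_embeddings.mean(dim=0)      # one vector for the whole graph
print(graph_vector.shape)                          # torch.Size([64])

The supervised variant described in the abstract would replace the reconstruction objective with a classification loss and add an attention module over neighboring sequences; that part is omitted here.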

Citation (APA)

Taheri, A., Gimpel, K., & Berger-Wolf, T. (2019). Sequence-to-sequence modeling for graph representation learning. Applied Network Science, 4(1). https://doi.org/10.1007/s41109-019-0174-8
