Multi-view joint graph representation learning for urban region embedding


Abstract

The increasing amount of urban data enables us to investigate urban dynamics, assist urban planning, and, eventually, make our cities more livable and sustainable. In this paper, we focus on learning an embedding space for urban regions from urban data. For the first time, we propose a multi-view joint learning model to learn comprehensive and representative urban region embeddings. We first model different types of region correlations based on both human mobility and inherent region properties. Then, we apply a graph attention mechanism to learn region representations from each view of the built correlations. Moreover, we introduce a joint learning module that boosts region embedding learning by sharing cross-view information and fuses the multi-view embeddings by learning adaptive weights. Finally, we exploit the learned embeddings in the downstream applications of land-usage classification and crime prediction in urban areas with real-world data. Extensive experimental results demonstrate that our proposed joint learning model improves performance on both tasks by a large margin compared with state-of-the-art methods.
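To make the architecture concrete, below is a minimal PyTorch sketch (not the authors' implementation) of the two core ideas in the abstract: one graph-attention layer per correlation view, and a learned softmax over view weights that fuses the per-view region embeddings. All class names, dimensions, and the toy graphs are hypothetical placeholders.

```python
# Minimal sketch, assuming one adjacency matrix per correlation view
# (e.g., a mobility view and a region-property view). Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewGAT(nn.Module):
    """Single-head graph attention over one region-correlation view."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        # x: (N, in_dim) region features; adj: (N, N) view-specific graph.
        h = self.proj(x)                                    # (N, out_dim)
        n = h.size(0)
        # Pairwise attention logits e_ij = a([h_i || h_j]).
        hi = h.unsqueeze(1).expand(n, n, -1)
        hj = h.unsqueeze(0).expand(n, n, -1)
        e = F.leaky_relu(self.attn(torch.cat([hi, hj], dim=-1)).squeeze(-1))
        # Mask non-edges, normalize over neighbors, aggregate.
        e = e.masked_fill(adj == 0, float('-inf'))
        alpha = torch.softmax(e, dim=-1)
        return F.elu(alpha @ h)                             # (N, out_dim)

class MultiViewFusion(nn.Module):
    """Fuses per-view embeddings with learned (adaptive) weights."""
    def __init__(self, in_dim, out_dim, num_views):
        super().__init__()
        self.views = nn.ModuleList(
            [ViewGAT(in_dim, out_dim) for _ in range(num_views)])
        self.view_logits = nn.Parameter(torch.zeros(num_views))

    def forward(self, x, adjs):
        # adjs: list of (N, N) adjacency matrices, one per view.
        zs = torch.stack([gat(x, a) for gat, a in zip(self.views, adjs)])
        w = torch.softmax(self.view_logits, dim=0)          # adaptive weights
        return torch.einsum('v,vnd->nd', w, zs)             # fused (N, out_dim)

# Toy usage: 5 regions, 2 views; self-loops keep every row attendable.
x = torch.randn(5, 16)
adjs = [(torch.rand(5, 5) > 0.5).float() + torch.eye(5) for _ in range(2)]
model = MultiViewFusion(16, 32, num_views=2)
print(model(x, adjs).shape)  # torch.Size([5, 32])
```

The softmax over `view_logits` stands in for the adaptive-weight fusion idea: views that contribute more to the downstream loss receive larger weights during training. The paper's joint learning module additionally shares information across views, which this sketch omits for brevity.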

Cite (APA)

Zhang, M., Li, T., Li, Y., & Hui, P. (2020). Multi-view joint graph representation learning for urban region embedding. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI-20) (pp. 4431–4437). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/611
