Accelerating core decomposition in large temporal networks using GPUs

Abstract

Many real-world systems are naturally modeled as temporal networks, such as neural connections in biological networks that evolve over time, or interactions between friends at different times in social networks. To visualize and analyze these temporal networks, core decomposition is an efficient strategy for distinguishing the relative “importance” of nodes. Existing work mostly focuses on core decomposition in non-temporal networks and pursues efficient CPU-based approaches. However, core decomposition is already a computationally expensive task, and applying these approaches to temporal networks makes it even more so. In this paper, we propose two novel methods for accelerating core decomposition in large temporal networks using the high parallelism of GPUs. In our evaluation, the proposed acceleration methods achieve a maximum of 4.1 billion TEPS (traversed edges per second), which corresponds to a speedup of up to 26.6× over single-threaded CPU execution.
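The paper's GPU kernels are not reproduced here, but the primitive being accelerated is standard k-core decomposition. Below is a minimal sketch of the classic sequential peeling algorithm on a static, undirected graph, the building block that temporal core decomposition applies across time windows; the function name and edge-list input are illustrative, not from the paper.

    from collections import defaultdict

    def core_decomposition(edges):
        """Compute the core number of every node in an undirected graph.

        Minimal peeling sketch: repeatedly remove a node of minimum
        remaining degree; a node's core number is the largest minimum
        degree observed up to the point it is removed.
        """
        adj = defaultdict(set)
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)

        degree = {u: len(nbrs) for u, nbrs in adj.items()}
        core = {}
        k = 0
        remaining = set(adj)
        while remaining:
            u = min(remaining, key=degree.__getitem__)  # node of minimum remaining degree
            k = max(k, degree[u])   # peeling level never decreases
            core[u] = k
            remaining.remove(u)
            for v in adj[u]:
                if v in remaining:
                    degree[v] -= 1  # removing u lowers its neighbors' degrees
        return core

    if __name__ == "__main__":
        # Triangle plus a pendant node: the triangle forms the 2-core.
        edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
        print(core_decomposition(edges))  # {3: 1, 0: 2, 1: 2, 2: 2}

This sketch only establishes the semantics of the core numbers being computed; the paper's contribution is parallelizing this inherently sequential peeling on GPUs for large temporal graphs.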

Citation (APA)

Zhang, H., Hou, H., Zhang, L., Zhang, H., & Wu, Y. (2017). Accelerating core decomposition in large temporal networks using GPUs. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10634 LNCS, pp. 893–903). Springer Verlag. https://doi.org/10.1007/978-3-319-70087-8_91
