Many real-world networks are naturally modeled as temporal networks, such as neural connections in biological networks that evolve over time, or interactions between friends at different times in social networks. To visualize and analyze these temporal networks, core decomposition is an efficient strategy for distinguishing the relative “importance” of nodes. Existing work mostly focuses on core decomposition in non-temporal networks and pursues efficient CPU-based approaches. Applying these approaches to temporal networks, however, makes core decomposition, already a computationally expensive task, even more costly. In this paper, we propose two novel methods for accelerating core decomposition in large temporal networks by exploiting the high parallelism of GPUs. In our evaluation, the proposed methods achieve a maximum of 4.1 billion TEPS (traversed edges per second), corresponding to up to a 26.6× speedup over single-threaded CPU execution.
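To make the notion of core decomposition concrete, the following is a minimal sequential sketch of the classic peeling algorithm on a static graph: repeatedly remove a node of minimum remaining degree, and record the running maximum of those minimum degrees as each node's core number. This is a generic baseline for illustration only; it is not the paper's GPU-parallel method, and it ignores the temporal dimension (edge timestamps) that the paper handles.

```python
from collections import defaultdict

def core_decomposition(edges):
    """Core numbers via iterative peeling (illustrative baseline,
    not the GPU-accelerated temporal algorithm of the paper).
    Repeatedly remove a node of minimum remaining degree; its core
    number is the running maximum of those minimum degrees."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    degree = {u: len(ns) for u, ns in adj.items()}
    core = {}
    k = 0
    remaining = set(adj)
    while remaining:
        # Pick a node of minimum remaining degree to peel next.
        u = min(remaining, key=degree.get)
        k = max(k, degree[u])
        core[u] = k
        remaining.remove(u)
        # Peeling u lowers the remaining degree of its neighbors.
        for w in adj[u]:
            if w in remaining:
                degree[w] -= 1
    return core
```

For example, on a triangle with one pendant node attached, the pendant node gets core number 1 and the triangle nodes get core number 2. The GPU acceleration studied in the paper parallelizes this inherently sequential peeling process, whose per-step minimum-degree selection is the main obstacle to parallelism.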
Zhang, H., Hou, H., Zhang, L., Zhang, H., & Wu, Y. (2017). Accelerating core decomposition in large temporal networks using GPUs. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10634 LNCS, pp. 893–903). Springer Verlag. https://doi.org/10.1007/978-3-319-70087-8_91