GPOP: A Scalable Cache- and Memory-efficient Framework for Graph Processing over Parts

26 citations · 17 Mendeley readers

Abstract

The past decade has seen the development of many shared-memory graph processing frameworks intended to reduce the effort of developing high-performance parallel applications. However, many of these frameworks, based on the Vertex-centric or Edge-centric paradigms, suffer from issues such as poor cache utilization, irregular memory accesses, heavy use of synchronization primitives, or theoretical inefficiency, which deteriorate overall performance and scalability. Recently, we proposed a cache- and memory-efficient partition-centric paradigm for computing PageRank [26]. In this article, we generalize this approach to develop a novel Graph Processing Over Parts (GPOP) framework that is cache efficient, scalable, and work efficient. GPOP induces locality in memory accesses by increasing the granularity of execution to vertex subsets called "parts," thereby dramatically improving the cache performance of a variety of graph algorithms. It achieves high scalability by enabling completely lock- and atomic-free computation. GPOP's built-in analytical performance model enables it to use a hybrid of source-centric and part-centric communication modes in a way that ensures work efficiency in each iteration, while simultaneously benefiting from high-bandwidth sequential memory accesses. Finally, the GPOP framework is designed with programmability in mind. It completely abstracts away the underlying parallelism and programming-model details from the user and provides an easy-to-program set of APIs with the ability to selectively continue the active vertex set across iterations. Such functionality is useful for many graph algorithms but is not intrinsically supported by current frameworks. We extensively evaluate the performance of GPOP for a variety of graph algorithms, using several large datasets. We observe that GPOP incurs up to 9×, 6.8×, and 5.5× fewer L2 cache misses than Ligra, GraphMat, and Galois, respectively. In terms of execution time, GPOP is up to 19×, 9.3×, and 3.6× faster than Ligra, GraphMat, and Galois, respectively.
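To make the partition-centric idea in the abstract concrete, the sketch below shows one PageRank-style iteration in which vertices are grouped into cache-sized "parts," contributions are scattered into per-part bins, and each part is then gathered by a single owner, so the gather phase needs no locks or atomics. This is a minimal, single-threaded illustration under assumed names and data layout; it is not GPOP's actual API, and the partitioning, damping constants, and helper function are hypothetical.

```cpp
// Minimal sketch of a partition-centric scatter/gather iteration
// (illustrative only; not GPOP's actual interface).
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

struct Graph {
    int64_t num_vertices;
    std::vector<int64_t> offsets;   // CSR row offsets
    std::vector<int64_t> neighbors; // CSR column indices (out-edges)
};

// One PageRank-style iteration over P parts (hypothetical helper).
void partition_centric_iteration(const Graph& g, int64_t P,
                                 const std::vector<double>& rank,
                                 std::vector<double>& next_rank) {
    const int64_t part_size = (g.num_vertices + P - 1) / P;

    // One message bin per destination part: (destination vertex, contribution).
    std::vector<std::vector<std::pair<int64_t, double>>> bins(P);

    // Scatter phase: each source vertex writes its contribution into the bin
    // of the destination's part. In a parallel version, threads own disjoint
    // source parts and write to private (source part, destination part) bins.
    for (int64_t v = 0; v < g.num_vertices; ++v) {
        const int64_t deg = g.offsets[v + 1] - g.offsets[v];
        if (deg == 0) continue;
        const double contrib = rank[v] / static_cast<double>(deg);
        for (int64_t e = g.offsets[v]; e < g.offsets[v + 1]; ++e) {
            const int64_t u = g.neighbors[e];
            bins[u / part_size].push_back({u, contrib});
        }
    }

    // Gather phase: each part is reduced by a single owner, so its slice of
    // next_rank stays cache-resident and needs no synchronization.
    std::fill(next_rank.begin(), next_rank.end(), 0.15 / g.num_vertices);
    for (int64_t p = 0; p < P; ++p) {
        for (const auto& [u, contrib] : bins[p]) {
            next_rank[u] += 0.85 * contrib;
        }
    }
}
```

Because each bin is written by one source part and read by one destination part, both phases consist of sequential, high-bandwidth accesses, which is the behavior the abstract attributes to the part-centric communication mode.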

Citation (APA)

Lakhotia, K., Kannan, R., Pati, S., & Prasanna, V. (2020). GPOP: A Scalable Cache- and Memory-efficient Framework for Graph Processing over Parts. ACM Transactions on Parallel Computing, 7(1). https://doi.org/10.1145/3380942
