Improving node-level MapReduce performance using processing-in-memory technologies


Abstract

Processing-in-Memory (PIM) is the concept of moving computation as close as possible to memory. This reduces data movement between the central processor and the memory system, and thus improves energy efficiency through lower memory traffic. In this paper we present our approach to embedding processing cores in 3D-stacked memories and evaluate the use of such a system for Big Data analytics. We present a simple server architecture that employs several energy-efficient PIM cores in multiple 3D-DRAM units; the server acts as a node of a cluster for Big Data analyses using the MapReduce programming framework. Our preliminary analyses show that, on a single node, up to 23% energy savings on the processing units can be achieved while reducing execution time by up to 8.8%. Additional energy savings can result from simplifying the system memory buses. We believe such energy-efficient systems with PIM capability will become viable in the near future because of their potential to scale the memory wall.
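For readers unfamiliar with the programming model the node executes, the following is a minimal, framework-agnostic Python sketch of the map, shuffle, and reduce phases on a single node. The word-count workload, the function names, and the per-core partitioning are illustrative assumptions, not the paper's implementation or evaluation setup.

```python
# Sketch of node-level MapReduce processing: records are partitioned across
# simulated PIM cores, map tasks run on each partition, then intermediate
# pairs are shuffled by key and reduced. Word count is only an example workload.
from collections import defaultdict


def map_phase(record):
    """Map task: emit (word, 1) pairs for each word in one input record."""
    return [(word, 1) for word in record.split()]


def shuffle(mapped_pairs):
    """Group intermediate values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups


def reduce_phase(key, values):
    """Reduce task: aggregate all values emitted for one key."""
    return key, sum(values)


def run_node(records, num_pim_cores=4):
    """Simulate one server node: split records across hypothetical PIM cores,
    run map tasks on each core's partition, then shuffle and reduce."""
    partitions = [records[i::num_pim_cores] for i in range(num_pim_cores)]
    mapped = []
    for partition in partitions:      # each partition handled by one PIM core
        for record in partition:
            mapped.extend(map_phase(record))
    grouped = shuffle(mapped)
    return dict(reduce_phase(k, v) for k, v in grouped.items())


if __name__ == "__main__":
    lines = ["big data needs big memory", "memory wall limits big data"]
    print(run_node(lines))   # e.g. {'big': 3, 'data': 2, 'memory': 2, ...}
```

In the architecture the paper describes, the map and reduce work shown above would execute on the energy-efficient cores embedded in the 3D-DRAM stacks rather than on the host processor, which is where the reported energy and runtime savings come from.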

Citation (APA)

Islam, M., Scrbak, M., Kavi, K. M., Ignatowski, M., & Jayasena, N. (2014). Improving node-level MapReduce performance using processing-in-memory technologies. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8806, pp. 425–437). Springer. https://doi.org/10.1007/978-3-319-14313-2_36
