Exploring processing in-memory for different technologies


Abstract

The recent emergence of IoT has led to a substantial increase in the amount of data to be processed. Today, a large number of applications are data intensive, involving massive data transfers between the processing cores and memory. These transfers become a bottleneck, mainly because of the limited bandwidth between memory and the processing cores. Processing in-memory (PIM) avoids this latency problem by performing computations at the source of the data. In this paper, we propose designs that enable PIM in the three major memory technologies, i.e., SRAM, DRAM, and the newly emerging non-volatile memories (NVMs). We exploit the analog properties of the different memories to implement simple logic functions, namely OR, AND, and majority, inside memory. We then extend these primitives to implement in-memory addition and multiplication. We compare the three memory technologies with a GPU by running general applications on them. Our evaluations show that SRAM, NVM, and DRAM are 29.8x (36.3x), 17.6x (20.3x), and 1.7x (2.7x) better in performance (energy consumption) than an AMD GPU.
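As a rough illustration of the idea (a sketch, not taken from the paper), the Python snippet below shows how bitwise AND/OR/majority primitives of the kind the abstract describes can be composed into addition. The names maj and pim_add are hypothetical; the bit-serial loop only models the logic, whereas an actual PIM design would evaluate these functions row-parallel in the analog domain of the memory array.

def maj(a: int, b: int, c: int) -> int:
    # Bitwise 3-input majority: the carry function of a full adder.
    return (a & b) | (b & c) | (a & c)

def pim_add(a: int, b: int, width: int = 32) -> int:
    # Ripple addition built only from AND/OR/majority-style bit operations.
    carry = 0
    result = 0
    for i in range(width):
        ai = (a >> i) & 1
        bi = (b >> i) & 1
        # Sum bit = XOR of the three inputs, expressed via OR, AND, and majority:
        # XOR3(a, b, c) = (a OR b OR c) AND NOT MAJ(a, b, c), OR (a AND b AND c).
        s = ((ai | bi | carry) & ~maj(ai, bi, carry)) | (ai & bi & carry)
        result |= (s & 1) << i
        carry = maj(ai, bi, carry)
    return result

if __name__ == "__main__":
    assert pim_add(1234, 5678) == 1234 + 5678
    print(pim_add(1234, 5678))

Multiplication can then be built on top of such an adder by accumulating shifted partial products, which is consistent with the extension from basic logic to addition and multiplication described in the abstract.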

Cite

APA

Gupta, S., Imani, M., & Rosing, T. (2019). Exploring processing in-memory for different technologies. In Proceedings of the ACM Great Lakes Symposium on VLSI, GLSVLSI (pp. 201–206). Association for Computing Machinery. https://doi.org/10.1145/3299874.3317977
