Architecture-circuit-technology co-optimization for resistive random access memory-based computation-in-memory chips

Abstract

Computation-in-memory (CIM) chips offer an energy-efficient approach to artificial-intelligence computing workloads, and resistive random-access memory (RRAM)-based CIM chips have proven to be a promising route to overcoming the von Neumann bottleneck. In this paper, we review our recent studies on the architecture-circuit-technology co-optimization of scalable CIM chips and the related hardware demonstrations. We first introduce architecture optimization methods that further minimize data movement between memory and computing units. We then propose a device-architecture-algorithm co-design simulator that provides guidelines for designing CIM systems; a physics-based compact RRAM model and an array-level analog computing model are embedded in the simulator. In addition, we propose a CIM compiler that optimizes the on-chip dataflow. Finally, we outline research perspectives for future development.
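As background for the array-level analog computing the abstract refers to, the core CIM operation can be sketched as an analog matrix-vector multiplication on an RRAM crossbar: weights are stored as cell conductances, inputs are applied as word-line voltages, and bit-line currents accumulate the products. The conductance range, number of programmable levels, and voltages below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative sketch (not from the paper): analog matrix-vector
# multiplication on an RRAM crossbar. By Ohm's law each cell passes
# I = G * V, and Kirchhoff's current law sums the currents along each
# bit line, so the bit-line current vector equals G.T @ V.

rng = np.random.default_rng(0)

def quantize_conductance(w, g_min=1e-6, g_max=1e-4, levels=16):
    """Map normalized weights in [0, 1] to one of `levels` discrete
    conductance states between g_min and g_max (siemens).
    All parameter values here are hypothetical."""
    steps = np.round(w * (levels - 1)) / (levels - 1)
    return g_min + steps * (g_max - g_min)

weights = rng.random((4, 3))        # 4 word lines x 3 bit lines
G = quantize_conductance(weights)   # programmed conductance matrix
V = 0.2 * rng.random(4)             # read voltages on the word lines

I_bitline = G.T @ V                 # summed bit-line currents (amperes)
```

In a real chip the currents are digitized by per-column ADCs, and device non-idealities (conductance variation, read noise, IR drop) perturb this ideal product; modeling those effects is precisely what the paper's compact RRAM model and array-level analog computing model are for.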

APA

Liu, Y., Gao, B., Tang, J., Wu, H., & Qian, H. (2023). Architecture-circuit-technology co-optimization for resistive random access memory-based computation-in-memory chips. Science China Information Sciences, 66(10). https://doi.org/10.1007/s11432-023-3785-8
