In-memory computing is a computing paradigm that integrates data storage and arithmetic computation. Resistive random access memory (RRAM) arrays with innovative peripheral circuitry can perform vector-matrix multiplication beyond basic Boolean logic. With this memory–computation duality, RRAM-based in-memory computing offers an efficient hardware solution for neural networks and other applications that depend heavily on matrix multiplication. Herein, recent developments in nanoscale RRAM devices and parallel progress at the circuit and microarchitecture levels are discussed. Emphasis is placed on the device properties and characteristics that make RRAM well suited for implementing analog synapses and neurons. 3D-stackable RRAM and on-chip training are introduced for large-scale integration. The circuit design and system organization of RRAM-based in-memory computing are essential to breaking the von Neumann bottleneck. These outcomes illuminate the path toward large-scale implementation of ultra-low-power, dense neural network accelerators.
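As a concrete illustration (not from the article itself), the analog vector-matrix multiplication performed by an RRAM crossbar can be sketched in NumPy: each cell's conductance acts as a matrix weight, Ohm's law produces per-cell currents, and Kirchhoff's current law sums them along each column. The function name, conductance range, and voltages below are illustrative assumptions.

```python
# Idealized sketch of analog vector-matrix multiplication in an RRAM crossbar.
# Assumptions (illustrative, not from the article): linear cells, no wire
# resistance, and conductances already programmed to the target weights.
import numpy as np

def crossbar_vmm(voltages, conductances):
    """Ideal crossbar read-out.

    voltages:     shape (rows,), read voltages applied to the word lines (V)
    conductances: shape (rows, cols), RRAM cell conductances (S)
    returns:      shape (cols,), bit-line currents (A); by Ohm's law and
                  Kirchhoff's current law this equals voltages @ conductances
    """
    return voltages @ conductances

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # conductances between HRS and LRS
v = np.array([0.1, 0.2, 0.0, 0.3])         # input vector encoded as voltages
i_out = crossbar_vmm(v, G)

# The analog column currents match the digital matrix-vector product.
expected = np.array([np.dot(v, G[:, j]) for j in range(G.shape[1])])
assert np.allclose(i_out, expected)
```

In a real array, peripheral circuitry (DACs on the word lines, ADCs or sense amplifiers on the bit lines) converts between the digital and analog domains, and nonidealities such as wire resistance and device variation perturb this ideal product.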
Citation: Yan, B., Li, B., Qiao, X., Xue, C.-X., Chang, M., Chen, Y., & Li, H. (Helen). (2019). Resistive Memory-Based In-Memory Computing: From Device and Large-Scale Integration System Perspectives. Advanced Intelligent Systems, 1(7). https://doi.org/10.1002/aisy.201900068