Abstract
This paper presents a ferroelectric FET (FeFET)-based processing-in-memory (PIM) architecture to accelerate the inference of deep neural networks (DNNs). We propose a digital in-memory vector-matrix multiplication (VMM) engine built on an FeFET crossbar, which enables bit-parallel computation and eliminates the analog-to-digital conversion required in prior mixed-signal PIM designs. A dedicated hierarchical network-on-chip (H-NoC) handles input broadcasting and on-the-fly partial-result processing, reducing data transmission volume and latency. Simulations in 28-nm CMOS technology show 115× and 6.3× higher computing efficiency (GOPs/W) over a desktop GPU (Nvidia GTX 1080Ti) and a resistive random access memory (ReRAM)-based design, respectively.
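The digital VMM scheme described above can be illustrated with a small sketch. This is not the paper's exact circuit-level design; it is an assumed bit-plane decomposition model in which each FeFET crossbar read corresponds to a bitwise AND of a broadcast input bit with stored weight bits, followed by a digital popcount and shift-add of partial results:

```python
import numpy as np

def digital_vmm(x, W, x_bits=4, w_bits=4):
    """Illustrative bit-parallel digital VMM: compute x @ W by
    decomposing inputs and weights into bit-planes. Each AND +
    popcount models one in-memory crossbar read (an assumption
    for illustration, not the paper's exact dataflow)."""
    acc = np.zeros(W.shape[1], dtype=np.int64)
    for i in range(x_bits):              # input bit-planes (LSB first)
        x_plane = (x >> i) & 1           # one broadcast bit per input element
        for j in range(w_bits):          # weight bit-planes
            w_plane = (W >> j) & 1       # FeFET cells store weight bits
            # Crossbar operation: AND the input bit with each stored
            # weight bit, then popcount along each output column.
            partial = (x_plane[:, None] & w_plane).sum(axis=0)
            acc += partial << (i + j)    # shift-add the partial result
    return acc

x = np.array([3, 5, 7], dtype=np.int64)
W = np.array([[1, 2], [3, 4], [5, 6]], dtype=np.int64)
print(digital_vmm(x, W))  # matches x @ W: [53 68]
```

Because every step is a bitwise operation plus an integer accumulation, the result is exact, which is the key advantage of a fully digital PIM engine over analog current summation followed by ADC quantization.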
Citation
Long, Y., Kim, D., Lee, E., Saha, P., Mudassar, B. A., She, X., … Mukhopadhyay, S. (2019). A Ferroelectric FET-Based Processing-in-Memory Architecture for DNN Acceleration. IEEE Journal on Exploratory Solid-State Computational Devices and Circuits, 5(2), 113–122. https://doi.org/10.1109/JXCDC.2019.2923745