Impact of analog memory device failure on in-memory computing inference accuracy

  • Li N
  • Tsai H
  • Narayanan V
  • Rasch M

Abstract

In-memory computing using analog non-volatile memory (NVM) devices can improve the speed and reduce the latency of deep neural network (DNN) inference. It has recently been shown that neuromorphic crossbar arrays, in which each weight is implemented as an analog conductance value of a phase-change memory device, achieve competitive accuracy and high power efficiency. However, because of the large number of NVM devices required and the difficulty of fabricating analog NVM devices, these chips typically contain some devices that fail during fabrication or over time. We study the impact of such failed devices on analog in-memory computing accuracy for various networks. We show that larger networks with fewer reused layers are more tolerant of failed devices, and that devices stuck at high-resistance states are better tolerated than devices stuck at low-resistance states. To improve the robustness of DNNs to defective devices, we develop training methods that inject noise and corrupted devices into the weight matrices during network training and show that this can increase network accuracy in the presence of failed devices. We also provide estimates of the maximum defective-device tolerance for several common networks.
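To make the failure model concrete, the sketch below shows one way stuck devices could be injected into a DNN weight matrix in NumPy. It is an illustration, not the authors' implementation: it assumes a differential conductance encoding (each weight realized as the difference of two conductances in [0, g_max]), and the function name `corrupt_weights` and the parameters `frac_stuck`, `p_high_res`, and `g_max` are hypothetical.

```python
# Hypothetical sketch (not the authors' code): injecting stuck analog devices
# into a weight matrix. Each weight is assumed to be encoded as the difference
# of two conductances, w ~ (g_plus - g_minus), each in [0, g_max]. A failed
# device is frozen at ~0 conductance (stuck at high resistance) or at g_max
# (stuck at low resistance); the corrupted pair is mapped back to a weight.
import numpy as np

rng = np.random.default_rng(0)

def corrupt_weights(w, frac_stuck=0.01, p_high_res=0.5, g_max=1.0):
    """Return a copy of `w` with a fraction of its analog devices stuck.

    frac_stuck : fraction of devices (two conductances per weight) that fail.
    p_high_res : probability that a failed device is stuck at high resistance
                 (conductance ~ 0); otherwise it is stuck at low resistance
                 (conductance ~ g_max).
    """
    w_max = np.max(np.abs(w)) + 1e-12
    scale = w_max / g_max
    # Differential encoding: positive weights on g_plus, negative on g_minus.
    g_plus = np.clip(w, 0.0, None) / scale
    g_minus = np.clip(-w, 0.0, None) / scale

    def inject(g):
        stuck = rng.random(g.shape) < frac_stuck
        high_res = stuck & (rng.random(g.shape) < p_high_res)
        low_res = stuck & ~high_res
        g = np.where(high_res, 0.0, g)    # stuck at high resistance
        g = np.where(low_res, g_max, g)   # stuck at low resistance
        return g

    return (inject(g_plus) - inject(g_minus)) * scale

# Example: effect of 5% failed devices on one layer's outputs.
w = rng.standard_normal((256, 128)) * 0.05
x = rng.standard_normal((32, 256))
y_clean = x @ w
y_fail = x @ corrupt_weights(w, frac_stuck=0.05)
print("relative output error:",
      np.linalg.norm(y_fail - y_clean) / np.linalg.norm(y_clean))
```

Under this assumed encoding, a device stuck at high resistance contributes roughly zero current (close to a zero weight), while one stuck at low resistance injects a large erroneous current, which is consistent with the abstract's observation that low-resistance failures are more harmful. Applying the same corruption, together with added weight noise, during training is the spirit of the robustness-training approach the paper describes.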

Citation (APA)

Li, N., Tsai, H., Narayanan, V., & Rasch, M. (2023). Impact of analog memory device failure on in-memory computing inference accuracy. APL Machine Learning, 1(1). https://doi.org/10.1063/5.0131797
