Neuromorphic hardware based on emerging analog memory technologies, such as resistive switching, shows promise for dense, energy-efficient systems, given its ultra-scalable footprint and low energy-per-bit consumption. Training neuromorphic systems large enough for useful real-life applications is theoretically highly efficient; in practice, however, it poses significant challenges due to device non-idealities such as stochasticity, asymmetry, and nonlinearity. We investigate mini-batch gradient descent in detail in the context of device non-idealities, using a 3-layer MLP on MNIST. Convergence curves for different batch sizes show a consistent increase in performance with larger batch sizes, but performance is highly sensitive to the learning rate. Statistical analysis of the weight trajectories and histograms shows different behavior between the software baseline and the non-ideal hardware case, but further investigation is needed to understand the trends in the weight populations across training for different batch sizes.
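To make the setting concrete, the sketch below shows how the non-idealities named in the abstract (stochasticity, asymmetry, nonlinearity) are commonly modeled in a mini-batch SGD weight update for resistive-switching synapses. This is not the authors' implementation; the function name, parameter values, and noise model are illustrative assumptions only.

```python
# Minimal sketch (illustrative, not the paper's code): one mini-batch SGD
# update with simulated device non-idealities for an analog synapse array.
import numpy as np

rng = np.random.default_rng(0)

def nonideal_update(w, grad, lr=0.01,
                    w_min=-1.0, w_max=1.0,   # assumed conductance-mapped weight range
                    asym=0.5,                # assumed potentiation/depression asymmetry
                    noise_std=0.02):         # assumed cycle-to-cycle stochasticity
    """Apply one non-ideal update step to the weight matrix w."""
    delta = -lr * grad
    # Nonlinearity: update magnitude shrinks as the weight approaches its bound.
    headroom_up = (w_max - w) / (w_max - w_min)
    headroom_dn = (w - w_min) / (w_max - w_min)
    scale = np.where(delta > 0, headroom_up, headroom_dn)
    # Asymmetry: depression (negative) updates are weaker than potentiation.
    scale = np.where(delta < 0, asym * scale, scale)
    # Stochasticity: multiplicative cycle-to-cycle noise on each update.
    noise = rng.normal(1.0, noise_std, size=w.shape)
    return np.clip(w + delta * scale * noise, w_min, w_max)

# Example: one update on the first layer of a hypothetical 3-layer MLP.
w = rng.uniform(-0.1, 0.1, size=(784, 256))   # 784 MNIST inputs, 256 hidden units
grad = rng.normal(0.0, 0.01, size=w.shape)    # stand-in for a mini-batch gradient
w = nonideal_update(w, grad)
```

Under this kind of model, larger batches average out per-update stochasticity but do not remove the asymmetry and bound-dependent nonlinearity, which is consistent with the batch-size and learning-rate sensitivities the abstract reports.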
Gao, Y., Wu, S., & Adam, G. C. (2020). Batch Training for Neuromorphic Systems with Device Non-idealities. In ACM International Conference Proceeding Series. Association for Computing Machinery. https://doi.org/10.1145/3407197.3407208