Despite the increasing popularity of deep neural networks (DNNs), they cannot be trained efficiently on existing platforms, and efforts have thus been devoted to designing dedicated hardware for DNNs. In our recent work, we provided direct support for the stochastic gradient descent (SGD) training algorithm by building the basic element of neural networks, the synapse, out of emerging memristor devices. Because plain SGD often converges slowly, more sophisticated optimization algorithms are commonly employed in DNN training; DNN accelerators that support only SGD may therefore fall short of DNN training requirements. In this paper, we present a memristor-based synapse that supports the widely used momentum algorithm. Momentum significantly improves the convergence of SGD and facilitates the DNN training stage. We propose two design approaches to support momentum: 1) a hardware-friendly modification of the momentum algorithm that uses memory external to the synapse structure, and 2) updating each synapse with a built-in memory. Our simulations show that the proposed DNN training solutions are as accurate as training on a GPU platform while speeding up performance by 886× and decreasing energy consumption by 7× on average.
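For reference, the momentum algorithm the paper targets augments SGD with a velocity term that accumulates an exponentially decaying sum of past gradients. The following minimal Python sketch shows this textbook update only, not the memristor circuit or the paper's hardware-friendly modification; the function name, learning rate, and momentum coefficient are illustrative assumptions.

import numpy as np

def momentum_update(w, v, grad, lr=0.01, beta=0.9):
    # Standard momentum-SGD step for one layer's synaptic weight matrix.
    # v is the velocity term; in the paper's second design approach this
    # state would reside in a per-synapse built-in memory, here it is
    # simply a NumPy array kept alongside the weights.
    v = beta * v - lr * grad   # decaying accumulation of past gradients
    w = w + v                  # apply the accumulated update to the weights
    return w, v

# Toy usage with a 4x3 weight matrix and a random gradient.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 3))
v = np.zeros_like(w)
grad = rng.standard_normal((4, 3))
w, v = momentum_update(w, v, grad)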
Greenberg-Toledo, T., Mazor, R., Haj-Ali, A., & Kvatinsky, S. (2019). Supporting the Momentum Training Algorithm Using a Memristor-Based Synapse. IEEE Transactions on Circuits and Systems I: Regular Papers, 66(4), 1571–1583. https://doi.org/10.1109/TCSI.2018.2888538