One of the key goals for the next decade is to push machine learning into sensors at the edge, enabling always-on operation within a sub-mW power budget. Achieving this goal, however, requires addressing memory organization challenges: current machine learning (ML) models (e.g., deep neural networks) have storage requirements for both weights and activations that are often incompatible with on-chip memories and/or low-cost, low-power external memory options. In this chapter, we outline key ideas, recent achievements, and directions for future research towards taming the memory bottleneck for ultra-low power ML circuits and systems.
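To make the mismatch between model storage and on-chip memory concrete, the following back-of-the-envelope sketch compares the footprint of a small convolutional network against an MCU-class SRAM budget. All layer shapes and the 512 KiB budget are illustrative assumptions, not figures from the chapter:

```python
# Hypothetical layer shapes for a small CNN, chosen only for illustration.
layers = [
    # (out_channels, in_channels, kernel_h, kernel_w)
    (16, 3, 3, 3),
    (32, 16, 3, 3),
    (64, 32, 3, 3),
    (128, 64, 3, 3),
    (10, 128, 1, 1),  # classifier
]

# Total weight count: each conv layer holds o*i*kh*kw parameters.
weights = sum(o * i * kh * kw for o, i, kh, kw in layers)

# Assume the largest live activation tensor is a 32x32 map with 64 channels.
activations = 64 * 32 * 32

bytes_fp32 = 4 * (weights + activations)  # 32-bit floats
bytes_int8 = 1 * (weights + activations)  # 8-bit quantized

sram_budget = 512 * 1024  # assumed MCU-class on-chip SRAM

print(f"fp32 footprint: {bytes_fp32 / 1024:.0f} KiB")
print(f"int8 footprint: {bytes_int8 / 1024:.0f} KiB")
print(f"fp32 fits in 512 KiB SRAM: {bytes_fp32 <= sram_budget}")
print(f"int8 fits in 512 KiB SRAM: {bytes_int8 <= sram_budget}")
```

Even this modest model overflows the assumed SRAM at 32-bit precision, while 8-bit quantization brings it comfortably within budget, which is one reason quantization and memory-aware model design recur throughout the chapter.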
Conti, F., Rusci, M., & Benini, L. (2020). The Memory Challenge in Ultra-Low Power Deep Learning. In Frontiers Collection (Vol. Part F1076, pp. 323–349). Springer VS. https://doi.org/10.1007/978-3-030-18338-7_19