The Memory Challenge in Ultra-Low Power Deep Learning


Abstract

One of the key goals for the next decade is to push machine learning into sensors at the edge, for always-on operation within a sub-mW power budget. Achieving this goal requires addressing memory organization challenges: current machine learning (ML) models (e.g., deep neural networks) have storage requirements for both weights and activations that are often incompatible with on-chip memories and with low-cost, low-power external memory options. In this chapter, we outline key ideas, recent achievements, and directions for future research towards taming the memory bottleneck for ultra-low-power ML circuits and systems.
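To make the weight/activation storage claim concrete, here is a minimal back-of-envelope sketch (not from the chapter; the model size, activation shape, and SRAM budget are illustrative assumptions) comparing a quantized network's memory footprint against a typical microcontroller's on-chip SRAM:

```python
# Back-of-envelope estimate (hypothetical numbers) of why DNN storage
# clashes with on-chip memory on ultra-low-power devices.

def model_memory_bytes(num_params, peak_activation_elems, bits=8):
    """Weights must be stored for the whole network; activation memory
    is dominated by the largest intermediate tensor kept live."""
    weight_bytes = num_params * bits // 8
    activation_bytes = peak_activation_elems * bits // 8
    return weight_bytes, activation_bytes

# Illustrative MobileNet-class network: ~4.2M parameters,
# largest activation ~112*112*32 elements, 8-bit quantization.
w, a = model_memory_bytes(4_200_000, 112 * 112 * 32, bits=8)
print(f"weights: {w / 1e6:.1f} MB, peak activation: {a / 1e3:.0f} kB")
# Even with 8-bit weights, ~4.2 MB of weight storage far exceeds
# the few hundred kB of SRAM found on typical low-power MCUs.
```

Even under aggressive 8-bit quantization, weight storage alone exceeds typical on-chip SRAM by roughly an order of magnitude, which is the mismatch the chapter targets.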

Citation (APA)

Conti, F., Rusci, M., & Benini, L. (2020). The Memory Challenge in Ultra-Low Power Deep Learning. In Frontiers Collection (Vol. Part F1076, pp. 323–349). Springer VS. https://doi.org/10.1007/978-3-030-18338-7_19
