How can deep neural networks be generated efficiently for devices with limited resources?

Abstract

Despite the increasing hardware capabilities of embedded devices, running a Deep Neural Network (DNN) on such systems remains a challenge. As the trend in DNN design is toward more complex architectures, computation time on low-resource devices increases dramatically because of their limited memory. Moreover, the physical memory required to store the network parameters grows with model complexity, hindering deployment of a feasible model on the target hardware. Although a compressed model helps reduce RAM consumption, a large number of consecutive deep layers still increases computation time. Despite the wide literature on DNN optimization, practical documentation for efficient deployment of these networks is scarce. In this paper, we propose efficient model generation by analyzing the parameters and their impact, and we address the design of a simple and comprehensive pipeline for optimal model deployment.
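The abstract notes that a compressed model reduces RAM consumption. As a minimal, hypothetical illustration of one common compression technique (not the scheme proposed in the paper), the sketch below performs symmetric per-tensor 8-bit linear quantization of a weight array with NumPy, shrinking storage by 4x relative to float32:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor linear quantization of float32 weights to int8.
    Illustrative sketch only -- not the method described in the paper."""
    # Scale so the largest-magnitude weight maps to 127
    scale = np.abs(weights).max() / 127.0 if weights.size else 1.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from int8 values and the scale."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(1000).astype(np.float32)
    q, s = quantize_int8(w)
    # int8 storage is 4x smaller than float32
    print(w.nbytes // q.nbytes)
```

The quantization error per weight is bounded by half the scale, which is why this style of compression usually preserves accuracy well enough for inference on constrained hardware.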

APA

Elordi, U., Unzueta, L., Arganda-Carreras, I., & Otaegui, O. (2018). How can deep neural networks be generated efficiently for devices with limited resources? In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10945 LNCS, pp. 24–33). Springer Verlag. https://doi.org/10.1007/978-3-319-94544-6_3