Reviewing inference performance of state-of-the-art deep learning frameworks


Abstract

Deep learning models have replaced conventional methods for machine learning tasks. Efficient inference on edge devices with limited resources is key for broader deployment. In this work, we focus on the tool-selection challenge for inference deployment. We present an extensive evaluation of the inference performance of deep learning software tools using state-of-the-art CNN architectures on multiple hardware platforms. We benchmark these hardware-software pairs across a broad range of network architectures, inference batch sizes, and floating-point precisions, focusing on latency and throughput. Our results reveal which hardware-software combinations perform best, and show that the optimum differs depending on whether minimum latency or maximum throughput is the goal.
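The latency and throughput measurements described above can be illustrated with a minimal, framework-agnostic sketch. The helper below is an assumption of how such a benchmark is typically structured (warm-up runs, repeated timed calls, per-batch latency, samples-per-second throughput); it is not the authors' harness, and `dummy_infer` is a hypothetical stand-in for a real CNN forward pass.

```python
import time

def benchmark(infer, batch, n_warmup=5, n_runs=50):
    """Measure per-batch latency (s) and throughput (samples/s) of `infer`."""
    for _ in range(n_warmup):       # warm-up calls, excluded from timing
        infer(batch)
    start = time.perf_counter()
    for _ in range(n_runs):         # timed calls
        infer(batch)
    elapsed = time.perf_counter() - start
    latency = elapsed / n_runs                   # seconds per batch
    throughput = len(batch) * n_runs / elapsed   # samples per second
    return latency, throughput

# Hypothetical "model": sums each sample; stands in for a CNN inference call.
def dummy_infer(batch):
    return [sum(x) for x in batch]

lat1, thr1 = benchmark(dummy_infer, [[1.0] * 1000])         # batch size 1
lat32, thr32 = benchmark(dummy_infer, [[1.0] * 1000] * 32)  # batch size 32
```

Sweeping such a loop over frameworks, batch sizes, and numeric precisions yields exactly the two metrics the paper compares: batch size 1 typically minimizes latency, while larger batches raise throughput at the cost of latency.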

Citation (APA)

Ulker, B., Stuijk, S., Corporaal, H., & Wijnhoven, R. (2020). Reviewing inference performance of state-of-the-art deep learning frameworks. In Proceedings of the 23rd International Workshop on Software and Compilers for Embedded Systems, SCOPES 2020 (pp. 48–53). Association for Computing Machinery, Inc. https://doi.org/10.1145/3378678.3391882
