Profiling and optimizing deep learning inference on mobile GPUs

Citations: 10 · Mendeley readers: 14

Abstract

The mobile GPU, ubiquitous on almost every smartphone, is increasingly exploited for deep learning inference. In this paper, we present our measurements of inference performance on mobile GPUs. Our observations suggest that mobile GPUs are underutilized. We study this inefficiency in depth and find that one of the root causes is improper partitioning of the compute workload. To address this, we propose a heuristics-based workload partitioning approach that considers both performance and overheads on mobile devices. Evaluation results show that our approach reduces inference latency by up to 32.8% across various devices and neural networks.
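The abstract does not detail the partitioning heuristic itself, but the general idea can be illustrated. The sketch below is a hypothetical example, not the authors' implementation: it enumerates candidate OpenCL-style work-group sizes for a kernel's global work size and scores them with a toy cost model (favoring groups that fill whole warps/wavefronts and are not too small). The names, the 256-thread work-group limit, and the warp size of 32 are all assumptions for illustration.

```python
# Hypothetical sketch of heuristics-based workload partitioning (not the
# paper's actual algorithm): choose a work-group size for a layer by
# scoring candidate partitions with a simple cost model.

def candidate_partitions(global_size, limits=(256, 256, 64)):
    """Enumerate work-group sizes that evenly divide the global work size."""
    xs = [d for d in range(1, limits[0] + 1) if global_size[0] % d == 0]
    ys = [d for d in range(1, limits[1] + 1) if global_size[1] % d == 0]
    zs = [d for d in range(1, limits[2] + 1) if global_size[2] % d == 0]
    for x in xs:
        for y in ys:
            for z in zs:
                if x * y * z <= 256:  # assumed device max work-group size
                    yield (x, y, z)

def heuristic_cost(group, warp=32):
    """Toy cost model: penalize wasted lanes in the last warp and
    very small groups (per-group scheduling overhead)."""
    threads = group[0] * group[1] * group[2]
    misalign = (warp - threads % warp) % warp  # idle lanes in last warp
    return misalign + 256 / threads            # overhead term for tiny groups

def best_partition(global_size):
    """Pick the lowest-cost partition for a given global work size."""
    return min(candidate_partitions(global_size), key=heuristic_cost)
```

In a real system the cost model would be replaced by measured or predicted latency on the target GPU, and the search would also weigh the overhead of evaluating candidates on-device, which is the trade-off the paper highlights.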

Citation (APA)

Jiang, S., Ran, L., Cao, T., Xu, Y., & Liu, Y. (2020). Profiling and optimizing deep learning inference on mobile GPUs. In APSys 2020 - Proceedings of the 2020 ACM SIGOPS Asia-Pacific Workshop on Systems (pp. 75–81). Association for Computing Machinery. https://doi.org/10.1145/3409963.3410493
