Accelerating deep learning inference with cross-layer data reuse on GPUs

Abstract

Accelerating deep learning inference is critical for real-time applications. In this paper, we propose a novel method for fusing the layers of convolutional neural networks (CNNs) on Graphics Processing Units (GPUs), applying data reuse analysis and access optimization across the levels of the GPU memory hierarchy. To balance computation against memory access, we explore fusion opportunities in the CNN computation graph and propose three fusion modes: straight, merge, and split. We then design an approach for generating efficient fused code that exploits multi-level memory usage for cross-layer data reuse. We evaluate the method on network layers from state-of-the-art CNNs on two GPU platforms, NVIDIA TITAN Xp and Tesla P4. Experiments show an average speedup of 2.02× on representative CNN structures and 1.57× on end-to-end inference of SqueezeNet.
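To make the idea concrete, the sketch below shows what "straight"-mode fusion of two convolution layers can look like as a single CUDA kernel: each thread block stages the intermediate (layer-1) tile it needs in shared memory and consumes it immediately for layer 2, so the intermediate feature map never round-trips through global memory. This is a minimal illustration under simplifying assumptions (single channel, 3×3 kernels, stride 1, valid convolution, fixed tile sizes); the kernel name and tile parameters are hypothetical, and it does not reproduce the paper's generated code.

```cuda
// Hypothetical sketch of "straight"-mode layer fusion: two 3x3 single-channel
// convolutions (stride 1, no padding) fused into one kernel. The layer-1
// output tile lives in shared memory and is never written to global memory.
// Launch with block = dim3(TILE, TILE), grid covering the final output.
#include <cuda_runtime.h>

#define TILE 16                      // layer-2 output tile per thread block
#define K    3                       // convolution kernel size
#define MID  (TILE + K - 1)          // layer-1 tile needed by layer 2
#define HALO (TILE + 2 * (K - 1))    // input tile needed by layer 1

__global__ void fused_conv3x3_conv3x3(const float *in, float *out,
                                      int width, int height,
                                      const float *w1, const float *w2) {
    __shared__ float s_in[HALO][HALO];  // input tile, incl. halo for both layers
    __shared__ float s_mid[MID][MID];   // intermediate tile (cross-layer reuse)

    const int outW = width  - 2 * (K - 1);  // size after two valid 3x3 convs
    const int outH = height - 2 * (K - 1);
    const int ox = blockIdx.x * TILE;        // top-left of this block's tile
    const int oy = blockIdx.y * TILE;

    // Stage 1: cooperatively load the input tile (with halo), zero-filling
    // reads that fall outside the image.
    for (int i = threadIdx.y; i < HALO; i += blockDim.y)
        for (int j = threadIdx.x; j < HALO; j += blockDim.x) {
            int x = ox + j, y = oy + i;
            s_in[i][j] = (x < width && y < height) ? in[y * width + x] : 0.0f;
        }
    __syncthreads();

    // Stage 2: compute the layer-1 tile entirely in shared memory.
    for (int i = threadIdx.y; i < MID; i += blockDim.y)
        for (int j = threadIdx.x; j < MID; j += blockDim.x) {
            float acc = 0.0f;
            for (int ky = 0; ky < K; ++ky)
                for (int kx = 0; kx < K; ++kx)
                    acc += w1[ky * K + kx] * s_in[i + ky][j + kx];
            s_mid[i][j] = acc;  // stays on chip; no DRAM round trip
        }
    __syncthreads();

    // Stage 3: compute the layer-2 output directly from the shared tile.
    int x = ox + threadIdx.x, y = oy + threadIdx.y;
    if (x < outW && y < outH) {
        float acc = 0.0f;
        for (int ky = 0; ky < K; ++ky)
            for (int kx = 0; kx < K; ++kx)
                acc += w2[ky * K + kx] * s_mid[threadIdx.y + ky][threadIdx.x + kx];
        out[y * outW + x] = acc;
    }
}
```

The payoff of keeping s_mid on chip is that the fused kernel trades a global-memory write and re-read of the whole intermediate feature map for some redundant halo work at tile borders, which is the computation/memory balance the abstract refers to.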

Citation

Wang, X., Li, G., Dong, X., Li, J., Liu, L., & Feng, X. (2020). Accelerating deep learning inference with cross-layer data reuse on GPUs. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12247 LNCS, pp. 219–233). Springer. https://doi.org/10.1007/978-3-030-57675-2_14
