AutoGraph: Optimizing DNN Computation Graph for Parallel GPU Kernel Execution


Abstract

Deep learning frameworks optimize computation graphs and intra-operator computations to boost inference performance on GPUs, while inter-operator parallelism is usually ignored. In this paper, a unified framework, AutoGraph, is proposed to obtain highly optimized computation graphs that favor parallel execution of GPU kernels. A novel dynamic programming algorithm, combined with backtracking search, is adopted to explore the optimal graph optimization solution, guided by fast performance estimation based on a mixed critical-path cost. Accurate runtime information, obtained by launching kernels on multiple GPU streams via CUDA Graph, is used to determine the convergence of the optimization. Experimental results demonstrate that our method achieves up to 3.47× speedup over existing graph optimization methods. Moreover, AutoGraph outperforms state-of-the-art parallel kernel launch frameworks by up to 1.26×.
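The abstract gives no implementation details, but the multi-stream launch via CUDA Graph it relies on follows a standard CUDA pattern: independent kernels (inter-operator parallelism) are issued on separate streams during stream capture, and the captured graph is then instantiated once and replayed with a single launch. The sketch below illustrates only that general pattern, not AutoGraph's actual code; branchKernel, the buffer sizes, and the fork/join events are hypothetical placeholders, and error checking is omitted for brevity.

    // Minimal sketch: two independent kernels on separate streams,
    // captured into one CUDA Graph and replayed with a single launch.
    // branchKernel is a placeholder, not an AutoGraph operator.
    #include <cuda_runtime.h>

    __global__ void branchKernel(float* data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] = data[i] * 2.0f + 1.0f;  // stand-in for one DNN operator
    }

    int main() {
        const int n = 1 << 20;
        const int block = 256, grid = (n + block - 1) / block;
        float *a, *b;
        cudaMalloc(&a, n * sizeof(float));
        cudaMalloc(&b, n * sizeof(float));

        cudaStream_t s0, s1;
        cudaStreamCreate(&s0);
        cudaStreamCreate(&s1);
        cudaEvent_t fork, join;
        cudaEventCreate(&fork);
        cudaEventCreate(&join);

        // Begin capture on the origin stream; s1 joins the capture
        // through a fork event so both streams' work lands in one graph.
        cudaStreamBeginCapture(s0, cudaStreamCaptureModeGlobal);
        cudaEventRecord(fork, s0);
        cudaStreamWaitEvent(s1, fork, 0);

        branchKernel<<<grid, block, 0, s0>>>(a, n);  // independent branch 1
        branchKernel<<<grid, block, 0, s1>>>(b, n);  // branch 2, overlaps with branch 1

        cudaEventRecord(join, s1);
        cudaStreamWaitEvent(s0, join, 0);            // merge back before ending capture
        cudaGraph_t graph;
        cudaStreamEndCapture(s0, &graph);

        cudaGraphExec_t exec;
        cudaGraphInstantiate(&exec, graph, 0);       // CUDA 12 signature; CUDA 11 takes five arguments
        cudaGraphLaunch(exec, s0);                   // one launch replays both branches
        cudaStreamSynchronize(s0);

        cudaGraphExecDestroy(exec);
        cudaGraphDestroy(graph);
        cudaEventDestroy(fork); cudaEventDestroy(join);
        cudaStreamDestroy(s0); cudaStreamDestroy(s1);
        cudaFree(a); cudaFree(b);
        return 0;
    }

Timing such a replay (e.g., with CUDA events around cudaGraphLaunch) is one way to obtain the accurate runtime feedback the abstract describes, though the paper's exact measurement setup is not given here.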

Citation (APA)

Zhao, Y., Sun, Q., He, Z., Bai, Y., & Yu, B. (2023). AutoGraph: Optimizing DNN Computation Graph for Parallel GPU Kernel Execution. In Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023 (Vol. 37, pp. 11354–11362). AAAI Press. https://doi.org/10.1609/aaai.v37i9.26343
