GPU code optimization using abstract kernel emulation and sensitivity analysis

Abstract

In this paper, we develop an approach to GPU kernel optimization that focuses on identifying the bottleneck resource and determining optimization parameters that can alleviate it. Performance modeling for GPUs is done by abstract kernel emulation along with latency/gap modeling of resources. Sensitivity analysis with respect to the resource latency/gap parameters is used to predict the bottleneck resource for a given kernel's execution. The utility of the bottleneck analysis is demonstrated in two contexts: 1) coupling the bottleneck-driven optimization strategy with the OpenTuner auto-tuner, with experimental results on all kernels from the Rodinia benchmark suite and on GPU tensor contraction kernels from the NWChem computational chemistry suite demonstrating its effectiveness; and 2) manual code optimization, where two case studies illustrate the use of the bottleneck analysis to iteratively improve the performance of code from state-of-the-art domain-specific code generators.
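The core idea in the abstract can be sketched as follows: a performance model predicts kernel runtime from per-resource latency/gap parameters, and perturbing each parameter in turn reveals which resource most constrains execution. The toy model, resource names, and parameter values below are illustrative assumptions for exposition, not the paper's actual emulation model.

```python
def predicted_runtime(gaps, usage):
    """Toy performance model: runtime is dominated by the most-contended
    resource, i.e. the largest (per-use gap * number of uses) product."""
    return max(gaps[r] * usage[r] for r in gaps)

def bottleneck(gaps, usage, delta=0.10):
    """Sensitivity analysis: perturb each resource's gap parameter by
    `delta` and rank resources by the resulting change in predicted
    runtime; the most sensitive resource is the bottleneck."""
    base = predicted_runtime(gaps, usage)
    sensitivity = {}
    for r in gaps:
        perturbed = dict(gaps)
        perturbed[r] *= 1.0 + delta
        sensitivity[r] = predicted_runtime(perturbed, usage) - base
    return max(sensitivity, key=sensitivity.get)

# Hypothetical per-use gaps (cycles) and use counts for three resources.
gaps = {"global_mem": 4.0, "shared_mem": 1.0, "fp_unit": 0.5}
usage = {"global_mem": 1000, "shared_mem": 2000, "fp_unit": 3000}
print(bottleneck(gaps, usage))  # prints "global_mem"
```

In this example only the global-memory term changes the predicted runtime when slowed down, so optimization effort (e.g. improving coalescing or caching) would be directed at that resource first, which mirrors how the bottleneck analysis guides both the auto-tuner and the manual case studies.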

Citation (APA)

Hong, C., Sukumaran-Rajam, A., Kim, J., Rawat, P. S., Krishnamoorthy, S., Pouchet, L. N., … Sadayappan, P. (2018). GPU code optimization using abstract kernel emulation and sensitivity analysis. ACM SIGPLAN Notices, 53(4), 736–751. https://doi.org/10.1145/3192366.3192397
