Inter-loop optimizations in RAJA using loop chains


Abstract

Typical parallelization approaches such as OpenMP and CUDA provide constructs for parallelizing individual loops and blocking them for data locality. Because they treat each loop in isolation, these approaches miss data locality opportunities that arise from inter-loop data reuse. The loop chain abstraction provides a framework for reasoning about and applying inter-loop optimizations. In this work, we incorporate the loop chain abstraction into RAJA, a performance portability layer for high-performance computing applications. Using the loop-chain-extended RAJA, or RAJALC, developers can have the RAJA library apply loop transformations such as loop fusion and overlapped tiling while maintaining the original structure of their programs. By introducing targeted symbolic evaluation capabilities, we collect and cache the data access information required to verify that loop transformations are legal. We evaluate the performance improvement and refactoring costs of our extension. Overall, it achieves 85-98% of the performance improvement of hand-optimized kernels with dramatically fewer code changes.
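
To make the inter-loop fusion idea concrete, the sketch below uses only the standard RAJA forall interface, not the RAJALC extension described in the paper. It shows two loops with inter-loop reuse of one array and the hand-fused form that a loop-chain transformation would derive automatically; the array names, sizes, and sequential execution policy are illustrative assumptions.

```cpp
#include <RAJA/RAJA.hpp>
#include <vector>

int main() {
  const int N = 1000;
  std::vector<double> a(N, 1.0), b(N, 0.0), c(N, 0.0);
  double* A = a.data();
  double* B = b.data();
  double* C = c.data();

  // Original program: two separate loops. The second loop reads B[i],
  // which the first loop just wrote, so fusing them improves locality.
  RAJA::forall<RAJA::seq_exec>(RAJA::RangeSegment(0, N),
    [=](RAJA::Index_type i) {
      B[i] = 2.0 * A[i];
    });
  RAJA::forall<RAJA::seq_exec>(RAJA::RangeSegment(0, N),
    [=](RAJA::Index_type i) {
      C[i] = B[i] + A[i];
    });

  // Hand-fused equivalent: legal here because iteration i of the
  // second loop depends only on iteration i of the first. RAJALC's
  // symbolic evaluation gathers this kind of data access information
  // to verify the transformation instead of requiring the rewrite.
  RAJA::forall<RAJA::seq_exec>(RAJA::RangeSegment(0, N),
    [=](RAJA::Index_type i) {
      B[i] = 2.0 * A[i];
      C[i] = B[i] + A[i];
    });

  return 0;
}
```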

Citation (APA)

Neth, B., Scogland, T. R. W., de Supinski, B. R., & Strout, M. M. (2021). Inter-loop optimizations in RAJA using loop chains. In Proceedings of the International Conference on Supercomputing (pp. 1–12). Association for Computing Machinery. https://doi.org/10.1145/3447818.3461665
