Flynn's Reconciliation: Automating the Register Cache Idiom for Cross-accelerator Programming

Abstract

A large portion of the recent performance increase in the High Performance Computing (HPC) and Machine Learning (ML) domains is fueled by accelerator cards. Many popular ML frameworks support accelerators by organizing computations as a computational graph over a set of highly optimized, batched general-purpose kernels. While this approach simplifies the kernels' implementation for each individual accelerator, the increasing heterogeneity among accelerator architectures for HPC complicates the creation of portable and extensible libraries of such kernels. Therefore, using a generalization of the CUDA community's warp register cache programming idiom, we propose a new programming idiom (CoRe) and a virtual architecture model (PIRCH), abstracting over SIMD and SIMT paradigms. We define and automate the mapping process from a single source to PIRCH's intermediate representation and develop backends that issue code for three different architectures: Intel AVX512, NVIDIA GPUs, and NEC SX-Aurora. Code generated by our source-to-source compiler for batched kernels, borG, competes favorably with vendor-tuned libraries and is up to 2× faster than hand-tuned kernels across architectures.
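For context, the "warp register cache" idiom referenced in the abstract keeps a warp's working set in per-thread registers and exchanges values between lanes with warp shuffles instead of going through shared memory. The following is a minimal CUDA sketch of that idiom (not taken from the paper; kernel name, stencil computation, and launch parameters are illustrative) for a simple 3-point averaging stencil.

```cuda
#include <cstdio>

// Register-cache idiom: each lane of a warp loads one element into a register
// and serves its neighbours' reads via warp shuffles instead of shared memory.
__global__ void stencil_register_cache(const float* in, float* out, int n) {
    const unsigned FULL_MASK = 0xffffffffu;
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // The "register cache": every lane holds one element in a register.
    float center = in[i];

    // Neighbouring values come from other lanes' registers via shuffles;
    // lanes at the warp boundary fall back to a global-memory load.
    float left  = __shfl_up_sync(FULL_MASK, center, 1);
    float right = __shfl_down_sync(FULL_MASK, center, 1);
    int lane = threadIdx.x & 31;
    if (lane == 0)  left  = (i > 0)     ? in[i - 1] : center;
    if (lane == 31) right = (i + 1 < n) ? in[i + 1] : center;

    out[i] = (left + center + right) / 3.0f;
}

int main() {
    const int n = 256;
    float h_in[n], h_out[n];
    for (int i = 0; i < n; ++i) h_in[i] = (float)i;

    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);

    stencil_register_cache<<<(n + 127) / 128, 128>>>(d_in, d_out, n);
    cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("out[1] = %f\n", h_out[1]);  // expect 1.0 (average of 0, 1, 2)
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

The paper's contribution, as the abstract describes, is generalizing this kind of lane-to-lane register exchange beyond CUDA warps so that a single source can be lowered to SIMD (AVX512, SX-Aurora) as well as SIMT (NVIDIA GPU) targets.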

Citation (APA)

Thuerck, D., Weber, N., & Bifulco, R. (2021). Flynn's Reconciliation: Automating the Register Cache Idiom for Cross-accelerator Programming. ACM Transactions on Architecture and Code Optimization, 18(3). https://doi.org/10.1145/3458357
