Efficient primitives for standard tensor linear algebra

Citations: 2
Readers (Mendeley): 9
Abstract

This paper presents the design and implementation of a low-level library to compute general sums and products over multi-dimensional arrays (tensors). Using only three low-level functions, the API at once generalizes core BLAS levels 1-3 and eliminates the need for most tensor transpositions. Despite their relatively low operation count, we show that these transposition steps can become performance-limiting in typical use cases of BLAS on tensors. The present API achieves peak performance on the same order of magnitude as vendor-optimized GEMM by using a code generator to emit CUDA source code for all computational kernels. The outline of these kernels is a multi-dimensional generalization of the MAGMA BLAS matrix multiplication for GPUs. Separate transposition steps can be skipped because every kernel allows arbitrary multi-dimensional transpositions of its arguments. The library, including its methodology and programming techniques, is made available in SLACK. Future improvements to the library include a high-level interface to translate directly from a LaTeX-like equation syntax to a data-parallel computation.
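The paper's own API is not reproduced here; as a rough, hypothetical analogue of the idea, NumPy's `einsum` expresses a tensor contraction with arbitrary index permutations in a single call, so the separate transposition step that would otherwise precede the contraction never materializes. The array names and index labels below are illustrative assumptions, not the library's interface.

```python
import numpy as np

# Illustrative sketch (not the paper's API): one fused contraction call
# that absorbs the transposition of its argument, in the spirit of a
# kernel accepting arbitrary multi-dimensional index permutations.
A = np.arange(24.0).reshape(2, 3, 4)   # indices a, b, c
B = np.arange(60.0).reshape(5, 4, 3)   # indices d, c, b (permuted order)

# Fused: contract over b and c directly, with B's axes as given.
C_fused = np.einsum('abc,dcb->ad', A, B)

# Equivalent two-step version: explicitly transpose B, then contract.
# This separate transposition pass is the step such kernels eliminate.
B_t = B.transpose(0, 2, 1)             # d, c, b  ->  d, b, c
C_twostep = np.tensordot(A, B_t, axes=([1, 2], [1, 2]))

assert np.allclose(C_fused, C_twostep)
```

The fused form also avoids materializing the transposed copy `B_t` in memory, which is where the bandwidth cost of a standalone transposition pass comes from.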


Citation (APA)

Rogers, D. M. (2016). Efficient primitives for standard tensor linear algebra. In ACM International Conference Proceeding Series (Vol. 17-21-July-2016). Association for Computing Machinery. https://doi.org/10.1145/2949550.2949580
