GPU Acceleration of Hermite Methods for the Simulation of Wave Propagation

Abstract

The Hermite methods of Goodrich, Hagstrom, and Lorenz (2006) use Hermite interpolation to construct high-order numerical methods for hyperbolic initial value problems. The structure of the method has several features favorable to parallel computing. In this work, we propose algorithms that take advantage of the many-core architecture of graphics processing units (GPUs). The algorithms exploit the compact stencil of Hermite methods and use data structures that allow for efficient data loads and stores. Additionally, the highly localized evolution operator of Hermite methods allows us to incorporate multi-stage time-stepping methods within the new algorithms while incurring minimal accesses of global memory. Using a scalar linear wave equation, we study the algorithm by considering Hermite interpolation and evolution as individual kernels, and alternatively by combining them into a monolithic kernel. For both approaches we demonstrate strategies to increase performance. Our numerical experiments show that although a two-kernel approach allows for better performance on the hardware, a monolithic kernel can offer a comparable time to solution with less global memory usage.
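To make the "compact stencil" concrete: in a Hermite method, the solution in each cell is reconstructed from values and derivatives stored only at the cell's own corner nodes, so no wide neighbor stencil is needed. The following is a minimal, illustrative Python sketch of the simplest such reconstruction, two-point (cubic) Hermite interpolation in one dimension; it is not the authors' GPU implementation, and the function name is ours.

```python
# Sketch of two-point (cubic) Hermite interpolation: the value anywhere in a
# cell is reconstructed from the values and first derivatives at the two cell
# endpoints only -- the compact stencil the abstract refers to. Illustrative
# CPU code, not the authors' GPU kernels.

def hermite_interpolate(x, x0, x1, p0, m0, p1, m1):
    """Evaluate the cubic Hermite interpolant on [x0, x1] at x.

    p0, p1: function values at the endpoints x0, x1
    m0, m1: first derivatives at the endpoints
    """
    dx = x1 - x0
    t = (x - x0) / dx  # map to the reference interval [0, 1]
    # Standard cubic Hermite basis functions on [0, 1].
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*p0 + h10*dx*m0 + h01*p1 + h11*dx*m1

# The interpolant is exact for cubics: reconstruct f(x) = x**3 at a cell center
# from endpoint data alone.
f, df = lambda x: x**3, lambda x: 3*x**2
mid = hermite_interpolate(1.5, 1.0, 2.0, f(1.0), df(1.0), f(2.0), df(2.0))
print(mid)  # 3.375
```

Because each cell's update depends only on its own nodal data, one GPU thread block can own a cell and keep the entire interpolation and evolution in fast on-chip memory, which is what makes the monolithic-kernel variant described above possible.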

Citation (APA)

Vargas, A., Chan, J., Hagstrom, T., & Warburton, T. (2017). GPU Acceleration of Hermite Methods for the Simulation of Wave Propagation. In Lecture Notes in Computational Science and Engineering (Vol. 119, pp. 357–368). Springer Verlag. https://doi.org/10.1007/978-3-319-65870-4_25
