Abstract
To facilitate the co-design of next-generation hardware architectures, it is critical to characterize the workloads of deep learning (DL) applications and assess their computational patterns at different levels of the execution stack. Time-series prediction is one such DL application heavily used in areas involving critical decision making: ensuring power grid resiliency, climate forecasting, transportation infrastructure optimization, stock market prediction, etc. Unlike cross-sectional data (e.g., images), time-series data is inherently sequential, posing challenges to parallelization in the context of deep learning. In this paper, we develop a proxy application for DL-based time-series prediction that uses spatio-temporal data from a dynamical system for model training and inference. We study the performance profiles of the associated computational patterns for both training and inference along four dimensions: models (Long Short-Term Memory and Convolutional Neural Network), DL frameworks (TensorFlow, PyTorch), data types (FP64, mixed precision), and single-node dense GPU platforms (NVIDIA DGX A100 and DGX-2 V100). Overall, our findings indicate that, across multiple variants of our time-series prediction proxy application, the computational profiles of TensorFlow and PyTorch mostly exhibit divergent overheads across GPU platforms. Our studies also demonstrate that the associated data movement, transformation, and combination can take more than 50% of the overall execution time. Both the source code and the workload profiles are made publicly available for community use and future studies.
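Since the abstract highlights that data movement, transformation, and combination dominate execution time, it may help to see what such a transformation looks like for spatio-temporal input. The sketch below is purely illustrative and is not taken from the paper's code: `make_windows` is a hypothetical helper that slices a (timesteps × spatial points) series into the sliding-window tensors an LSTM or CNN forecaster would typically consume.

```python
import numpy as np

def make_windows(series, window):
    """Slice a (T, S) spatio-temporal series into (N, window, S) model
    inputs and (N, S) next-step targets, where N = T - window.

    Hypothetical preprocessing sketch; not the paper's actual pipeline.
    """
    T = series.shape[0]
    # Each input sample is a contiguous block of `window` timesteps
    # over all S spatial measurement points.
    X = np.stack([series[i:i + window] for i in range(T - window)])
    # The target for each window is the timestep immediately after it.
    y = series[window:]
    return X, y

# Toy example: 100 timesteps from 8 spatial measurement points.
data = np.random.rand(100, 8)
X, y = make_windows(data, window=10)
# X has shape (90, 10, 8); y has shape (90, 8).
```

Note that this kind of windowing copies and reshapes the raw series before any GPU kernel runs, which is one reason data transformation can account for such a large fraction of end-to-end time.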
Citation
Jain, M., Ghosh, S., & Nandanoori, S. P. (2022). Workload characterization of a time-series prediction system for spatio-temporal data. In ACM International Conference Proceeding Series (pp. 159–168). Association for Computing Machinery. https://doi.org/10.1145/3528416.3530242