Dynamic time warping (DTW) has been applied to a wide range of machine learning problems involving the comparison of time series. An important feature of such time series is that they can be sparse, in the sense that the data take the value zero at many epochs. This corresponds, for example, to quiet periods in speech or to a lack of physical or dietary activity. However, conventional DTW applied to such sparse time series runs a full search that ignores the zero data. In this paper we focus on the development and analysis of a fast dynamic time warping algorithm that is exactly equivalent to DTW in the unconstrained case, where there are no global constraints on the permissible warping path. We call this sparse dynamic time warping (SDTW). A careful formulation and analysis determine exactly how SDTW should treat the zero data. It is shown that SDTW reduces the computational complexity relative to DTW by about twice the sparsity ratio, which is defined as the arithmetic mean of the fractions of nonzeros in the two time series. Numerical experiments confirm the speed advantage of SDTW relative to DTW for sparse time series with sparsity ratios up to 0.4. This study provides a benchmark, and also background for potentially understanding how to exploit such sparsity in the more complex case of a global constraint, or when the underlying time series are approximated to reduce complexity.
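For reference, the two quantities the abstract relies on can be sketched in a few lines of Python: the standard unconstrained DTW dynamic program (the baseline that SDTW matches exactly) and the sparsity ratio as defined above. This is a minimal illustrative sketch; the function names are our own, the local cost is assumed to be the absolute difference, and the SDTW algorithm itself is not reproduced here.

```python
def dtw_distance(x, y):
    """Unconstrained DTW with absolute-difference local cost,
    computed by the standard O(len(x) * len(y)) dynamic program."""
    n, m = len(x), len(y)
    INF = float("inf")
    # D[i][j] = cost of best warping path aligning x[:i] with y[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]


def sparsity_ratio(x, y):
    """Arithmetic mean of the fractions of nonzero samples in x and y,
    as defined in the abstract."""
    fx = sum(1 for v in x if v != 0) / len(x)
    fy = sum(1 for v in y if v != 0) / len(y)
    return (fx + fy) / 2
```

For example, `sparsity_ratio([0, 0, 1, 2, 0, 0], [0, 1, 2, 0])` averages the nonzero fractions 2/6 and 2/4, giving 5/12 ≈ 0.42, just above the 0.4 regime where the experiments report a speed advantage for SDTW.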
CITATION STYLE
Hwang, Y., & Gelfand, S. B. (2017). Sparse dynamic time warping. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10358 LNAI, pp. 163–175). Springer Verlag. https://doi.org/10.1007/978-3-319-62416-7_12