Insights into LSTM Fully Convolutional Networks for Time Series Classification

154 citations · 296 Mendeley readers

This article is free to access.

Abstract

Long Short-Term Memory fully convolutional networks (LSTM-FCNs) and Attention LSTM-FCNs (ALSTM-FCNs) have been shown to achieve state-of-the-art performance on the task of classifying time series signals on the old University of California, Riverside (UCR) time series repository. However, there has been no study of why LSTM-FCN and ALSTM-FCN perform well. In this paper, we perform a series of ablation tests (3627 experiments) on LSTM-FCN and ALSTM-FCN to provide a better understanding of the model and each of its sub-modules. The results of the ablation tests on ALSTM-FCN and LSTM-FCN show that the LSTM and FCN blocks perform better when applied in a conjoined manner. Two z-normalization techniques, z-normalizing each sample independently and z-normalizing the whole dataset, are compared using a Wilcoxon signed-rank test to show a statistical difference in performance. In addition, we provide an understanding of the impact the dimension shuffle has on LSTM-FCN by comparing its performance with that of LSTM-FCN when no dimension shuffle is applied. Finally, we demonstrate the performance of LSTM-FCN when the LSTM block is replaced by a gated recurrent unit (GRU), a basic recurrent neural network (RNN), and a dense block.
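The two z-normalization schemes and the dimension shuffle discussed in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' code; the array shapes and the epsilon guard are illustrative assumptions for a toy batch of univariate series.

```python
import numpy as np

# Toy batch of univariate time series: (n_samples, n_timesteps).
# Shapes are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 128))

# Technique 1: z-normalize each sample independently,
# using that sample's own mean and standard deviation.
X_per_sample = (X - X.mean(axis=1, keepdims=True)) / (
    X.std(axis=1, keepdims=True) + 1e-8  # epsilon avoids division by zero
)

# Technique 2: z-normalize with statistics of the whole dataset,
# so every sample shares one global mean and standard deviation.
X_whole = (X - X.mean()) / (X.std() + 1e-8)

# Dimension shuffle (as used in LSTM-FCN): instead of feeding the LSTM
# 128 time steps of 1 feature each, feed it a single time step whose
# feature vector is the entire 128-step series.
X_no_shuffle = X[:, :, np.newaxis]   # (8, 128, 1): 128 steps, 1 feature
X_shuffled = X[:, np.newaxis, :]     # (8, 1, 128): 1 step, 128 features
```

With the shuffled layout the recurrent block processes each series in a single step, which is part of what the paper's ablation isolates when comparing against the no-shuffle variant.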

Cite

CITATION STYLE

APA

Karim, F., Majumdar, S., & Darabi, H. (2019). Insights into LSTM fully convolutional networks for time series classification. IEEE Access, 7, 67718–67725. https://doi.org/10.1109/ACCESS.2019.2916828
