Online Attention Enhanced Differential and Decomposed LSTM for Time Series Prediction

Abstract

Because time series data are time-varying and bursty, accurate and lag-free time series prediction is challenging. To address these problems, we propose an online attention-enhanced differential and decomposed LSTM (Long Short-Term Memory) model called OADDL, which better captures the comprehensive core features and important structures of a time series. In this model, the core features of the time series are first generated through differencing and decomposition, which reduce data complexity and remove noise. A self-attention module and an LSTM then capture the core features and important structures of the series across the full time range. Finally, an FCN (Fully Connected Network) fuses the features of the time series from all aspects. We also design an online two-stage training mode for this model, in which the attention-enhanced LSTM and the FCN are trained sequentially, while the training set and model hyper-parameters are continuously updated over time, further capturing the time-varying and bursty characteristics of the series. We evaluate the model on three typical datasets, and the experimental results show that, compared with recent representative deep learning models, OADDL predicts time series data more accurately and effectively alleviates prediction lag.
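The abstract outlines a pipeline of differencing and decomposition, followed by self-attention plus an LSTM, with an FCN fusing the resulting features. The PyTorch sketch below is a minimal illustration of that pipeline, not the authors' implementation: the first-order differencing, the moving-average trend/residual decomposition, the layer sizes, and the names OADDLSketch, difference, and decompose are all assumptions, since the abstract does not specify these details.

```python
# Minimal sketch (not the paper's reference code) of the OADDL pipeline from
# the abstract: differencing + decomposition to simplify the series, then
# self-attention + LSTM to model it, then an FCN to fuse features and predict.
# The specific differencing and decomposition operators below are assumptions.
import torch
import torch.nn as nn


def difference(x: torch.Tensor) -> torch.Tensor:
    """First-order differencing along time: (batch, time, 1) -> (batch, time-1, 1)."""
    return x[:, 1:, :] - x[:, :-1, :]


def decompose(x: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Split the series into trend (moving average) and residual channels."""
    pad = k // 2
    trend = nn.functional.avg_pool1d(
        x.transpose(1, 2), kernel_size=k, stride=1, padding=pad,
        count_include_pad=False,
    ).transpose(1, 2)
    residual = x - trend
    return torch.cat([trend, residual], dim=-1)  # (batch, time, 2)


class OADDLSketch(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(2, d_model)  # trend + residual channels
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.lstm = nn.LSTM(d_model, d_model, batch_first=True)
        self.fcn = nn.Sequential(  # fuses features, emits one-step prediction
            nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.embed(decompose(difference(x)))  # core features of the series
        z, _ = self.attn(z, z, z)                 # attention over all time steps
        z, _ = self.lstm(z)                       # sequential structure
        return self.fcn(z[:, -1, :])              # one-step-ahead prediction


model = OADDLSketch()
window = torch.randn(8, 33, 1)  # a batch of sliding windows (dummy data)
print(model(window).shape)      # torch.Size([8, 1])
```

In the online two-stage mode the abstract describes, the attention-enhanced LSTM and the FCN would be trained one after the other on a sliding window, with the training set and hyper-parameters refreshed as new observations arrive; that loop is omitted from the sketch.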

Citation (APA)

Li, L., Huang, S., Liu, G., Luo, C., Yu, Q., & Li, N. (2024). Online Attention Enhanced Differential and Decomposed LSTM for Time Series Prediction. IEEE Access, 12, 62416–62428. https://doi.org/10.1109/ACCESS.2024.3395651
