Bayesian Inference for Training of Long Short Term Memory Models in Chaotic Time Series Forecasting

Abstract

In time series forecasting, models are built from past observations of the same sequence. When a model is learned from data, there is typically no additional information about the amount of noise present in the available observations. In practice, one must work with finite, noisy datasets, which leads to uncertainty about the adequacy of the model. For this problem, Bayesian inference tools are well suited. A modified algorithm for training a long short-term memory (LSTM) recurrent neural network for time series forecasting is presented. This approach was chosen to improve the forecasting of the original series, using an implementation based on minimizing the associated Kullback-Leibler Information Criterion. For comparison, a nonlinear autoregressive model implemented with a feedforward neural network is also presented. A simulation study was conducted to evaluate and illustrate the results, comparing this approach with Bayesian neural-network-based algorithms on artificial chaotic time series and showing an improvement in terms of forecasting errors.
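
The sketch below is illustrative only, not the authors' implementation: it shows one common way to train an LSTM forecaster with a Kullback-Leibler penalty, keeping a mean-field Gaussian posterior over the readout weights and minimizing a prediction loss plus the KL term. The logistic-map series (as a stand-in for the chaotic benchmark), network sizes, prior, and KL weight are all assumptions made for the example.

import torch
import torch.nn as nn
from torch.distributions import Normal, kl_divergence

torch.manual_seed(0)

# Synthetic chaotic series: logistic map x_{t+1} = 4 x_t (1 - x_t) (illustrative data)
x = torch.empty(600)
x[0] = 0.3
for t in range(599):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

# Windowed samples: 10 past values predict the next value
win = 10
X = torch.stack([x[i:i + win] for i in range(len(x) - win)]).unsqueeze(-1)  # (N, win, 1)
y = x[win:].unsqueeze(-1)                                                   # (N, 1)

class VariationalLSTM(nn.Module):
    """Deterministic LSTM body with a Gaussian posterior over the readout weights."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        # Variational parameters of the readout: per-weight mean and log-std
        self.w_mu = nn.Parameter(torch.zeros(hidden, 1))
        self.w_logstd = nn.Parameter(torch.full((hidden, 1), -3.0))
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, inp):
        h, _ = self.lstm(inp)        # (N, win, hidden)
        h_last = h[:, -1, :]         # last hidden state of each window
        # Reparameterized sample of the readout weights
        eps = torch.randn_like(self.w_mu)
        w = self.w_mu + self.w_logstd.exp() * eps
        return h_last @ w + self.b

    def kl(self):
        # KL between the approximate posterior and a standard-normal prior (assumed prior)
        q = Normal(self.w_mu, self.w_logstd.exp())
        p = Normal(torch.zeros_like(self.w_mu), torch.ones_like(self.w_mu))
        return kl_divergence(q, p).sum()

model = VariationalLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
beta = 1.0 / len(X)   # KL weight per sample (illustrative choice)

for epoch in range(200):
    opt.zero_grad()
    pred = model(X)
    loss = nn.functional.mse_loss(pred, y) + beta * model.kl()
    loss.backward()
    opt.step()

print("final training loss:", float(loss))

In this sketch the KL term plays the role of the regularizer that the abstract's KL-based criterion suggests: it penalizes posteriors that drift far from the prior, which is one way Bayesian training can temper overfitting on short, noisy chaotic series.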

Citation

Rivero, C. R., Pucheta, J., Patiño, D., Puglisi, J. L., Otaño, P., Franco, L., … Orjuela-Cañón, A. D. (2019). Bayesian Inference for Training of Long Short Term Memory Models in Chaotic Time Series Forecasting. In Communications in Computer and Information Science (Vol. 1096 CCIS, pp. 197–208). Springer. https://doi.org/10.1007/978-3-030-36211-9_16
