Improving multi-step prediction of learned time series models

Citations: 162
Readers (Mendeley): 222

Abstract

Most typical statistical and machine learning approaches to time series modeling optimize a single-step prediction error. In multi-step simulation, the learned model is applied iteratively, feeding its previous output back in as the new input. Any such predictor, however, inevitably introduces errors, and these compounding errors change the input distribution for future prediction steps, breaking the train-test i.i.d. assumption common in supervised learning. We present an approach that reuses training data to make a no-regret learner robust to errors made during multi-step prediction. Our insight is to formulate the problem as imitation learning; the training data serves as a "demonstrator" by providing corrections for the errors made during multi-step prediction. By this reduction of multi-step time series prediction to imitation learning, we establish theoretically a strong performance guarantee relating training error to multi-step prediction error. We present experimental results for our method, DaD, and show significant improvement over the traditional approach in two notably different domains: dynamic system modeling and video texture prediction.
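
The abstract describes a data-aggregation loop in which the learner is rolled forward over the training sequences and the recorded ground truth supplies corrective targets for the states the learner actually reaches. Below is a minimal sketch of that idea, assuming a scikit-learn Ridge regressor as the single-step predictor; the function names (train_dad, multistep_error), the regularization setting, and the best-iterate selection are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge


def multistep_error(model, trajectories):
    """Mean squared error of rolling the model forward over each trajectory."""
    total, count = 0.0, 0
    for traj in trajectories:
        pred = traj[0]
        for t in range(1, len(traj)):
            pred = model.predict(pred.reshape(1, -1))[0]
            total += np.sum((pred - traj[t]) ** 2)
            count += 1
    return total / count


def train_dad(trajectories, n_iterations=10):
    """Sketch of a Data-as-Demonstrator-style training loop.

    `trajectories` is a list of arrays of shape (T, d) holding observed series.
    """
    # Initial dataset: ordinary single-step pairs (x_t -> x_{t+1}).
    X = np.vstack([traj[:-1] for traj in trajectories])
    Y = np.vstack([traj[1:] for traj in trajectories])

    model = Ridge(alpha=1e-3).fit(X, Y)
    best_model, best_err = model, multistep_error(model, trajectories)

    for _ in range(n_iterations):
        new_X, new_Y = [], []
        for traj in trajectories:
            pred = traj[0]
            # Roll the current model forward; wherever its prediction drifts
            # from the recorded series, the ground-truth next observation is
            # the corrective target the "demonstrator" provides.
            for t in range(len(traj) - 2):
                pred = model.predict(pred.reshape(1, -1))[0]
                new_X.append(pred)          # state the learner actually reaches
                new_Y.append(traj[t + 2])   # true next observation as correction
        # Aggregate the corrections with all earlier data and retrain.
        X = np.vstack([X, np.array(new_X)])
        Y = np.vstack([Y, np.array(new_Y)])
        model = Ridge(alpha=1e-3).fit(X, Y)

        # Keep the iterate with the lowest multi-step error on the training set.
        err = multistep_error(model, trajectories)
        if err < best_err:
            best_model, best_err = model, err
    return best_model
```

In this sketch the single-step learner is linear, but any regressor exposing fit/predict could be substituted; the point is only that each iteration retrains on the aggregate of the original pairs and the correction pairs gathered from the learner's own multi-step rollouts.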

Citation (APA)

Venkatraman, A., Hebert, M., & Bagnell, J. A. (2015). Improving multi-step prediction of learned time series models. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1), 3024–3030. AI Access Foundation. https://doi.org/10.1609/aaai.v29i1.9590
