Regime-switching recurrent reinforcement learning in automated trading

Abstract

The regime-switching recurrent reinforcement learning (RSRRL) model was first presented in [19], in the form of a GARCH-based threshold version that extended the standard RRL algorithm developed by [22]. In this study, the main aim is to investigate the influence of different transition variables, in multiple RSRRL settings and for various datasets, and to compare and contrast the performance of the RRL and RSRRL systems in algorithmic trading experiments. The transition variables considered are GARCH-based volatility, detrended volume, and the rate of information arrival, the latter being modelled on the Mixture Distribution Hypothesis (MDH). A frictionless setting was assumed for all the experiments. The results showed that the RSRRL models yield higher Sharpe ratios than the standard RRL in-sample, but struggle to reproduce the same performance levels out-of-sample. We argue that the lack of correlation between in-sample and out-of-sample performance is due to a drastic change in market conditions, and find that the RSRRL can consistently outperform the RRL only when certain conditions are present. We also find that trading volume shows considerable promise as an indicator and could be the way forward for the design of more sophisticated RSRRL systems. © 2011 Springer-Verlag Berlin Heidelberg.
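
The core idea summarised in the abstract is that a standard RRL trader maps recent returns (and its own previous position) to a position in [-1, 1], while the RSRRL runs two such traders in parallel and blends them according to a transition variable that proxies the market regime. The Python sketch below is a minimal illustration of that structure, not the authors' implementation; the logistic transition function, the parameter names (theta_a, theta_b, q, c, gamma), and the choice to mix the two branch outputs are assumptions made for exposition.

    import numpy as np

    def rrl_position(theta, x):
        # Standard RRL trader: the feature vector x typically holds a bias
        # term, recent returns, and the previous position; the output is a
        # trading position in [-1, 1].
        return np.tanh(np.dot(theta, x))

    def rsrrl_position(theta_a, theta_b, x, q, c=0.0, gamma=1.0):
        # Two-regime RSRRL sketch: blend two RRL branches with a logistic
        # weight driven by the transition variable q (e.g. GARCH-based
        # volatility, detrended volume, or estimated information arrival
        # under the MDH). c and gamma are illustrative location/smoothness
        # parameters; as gamma grows large the weight approaches a hard
        # threshold, as in the GARCH-based threshold version of [19].
        g = 1.0 / (1.0 + np.exp(-gamma * (q - c)))
        return g * rrl_position(theta_a, x) + (1.0 - g) * rrl_position(theta_b, x)

In keeping with the RRL framework of [22], the weight vectors in such a model would be trained by gradient ascent on a risk-adjusted performance measure such as the (differential) Sharpe ratio, rather than on forecast error; this is what makes the approach reinforcement learning rather than supervised prediction.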

Citation (APA)

Maringer, D., & Ramtohul, T. (2011). Regime-switching recurrent reinforcement learning in automated trading. Studies in Computational Intelligence, 380, 93–121. https://doi.org/10.1007/978-3-642-23336-4_6
