Learn Continuously, Act Discretely: Hybrid Action-Space Reinforcement Learning For Optimal Execution


Abstract

Optimal execution is a sequential decision-making problem for cost-saving in algorithmic trading. Studies have found that reinforcement learning (RL) can help decide the order-splitting sizes. However, one problem remains unsolved: how to place limit orders at appropriate limit prices. The key challenge lies in the "continuous-discrete duality" of the action space. On the one hand, a continuous action space expressed as percentage changes in prices is preferred for generalization. On the other hand, the trader ultimately needs to choose limit prices discretely because of the tick size, which requires specialization to each stock's characteristics (e.g., its liquidity and price range). We therefore need continuous control for generalization and discrete control for specialization. To this end, we propose a hybrid RL method that combines the advantages of both. We first use a continuous control agent to scope an action subset, and then deploy a fine-grained agent to choose a specific limit price. Extensive experiments show that our method has higher sample efficiency and better training stability than existing RL algorithms and significantly outperforms previous learning-based methods for order execution.
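The abstract only describes the architecture at a high level. Below is a minimal PyTorch sketch of how such a continuous-then-discrete pipeline could be wired: a continuous head proposes a percentage price offset (the generalizable part), the offset is snapped to the tick grid and expanded into a small set of candidate limit prices (the scoped action subset), and a discrete head selects one candidate (the specialized part). All names here (HybridPolicy, n_candidates, max_offset_pct) are illustrative assumptions, not the authors' actual implementation; the paper's network design and training details are in the full text.

```python
# Illustrative sketch of a hybrid continuous/discrete action space for
# limit-price selection. Names and hyperparameters are assumptions.
import torch
import torch.nn as nn


class HybridPolicy(nn.Module):
    """Continuous head proposes a price region; discrete head picks a tick.

    1. The continuous agent outputs a percentage offset from the mid price,
       which generalizes across stocks with different price levels.
    2. The offset is snapped to the tick grid and expanded into a few
       neighbouring candidate limit prices (the scoped action subset).
    3. The discrete agent scores the candidates and selects one, which
       specializes the choice to the stock's tick size and liquidity.
    """

    def __init__(self, state_dim: int, n_candidates: int = 5, hidden: int = 64):
        super().__init__()
        self.n_candidates = n_candidates
        self.backbone = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        # Continuous head: squashed percentage offset in [-1, 1], later
        # scaled by a maximum relative deviation (e.g., 1% of mid price).
        self.cont_head = nn.Linear(hidden, 1)
        # Discrete head: one logit per candidate tick around the offset.
        self.disc_head = nn.Linear(hidden + 1, n_candidates)

    def forward(self, state, mid_price, tick_size, max_offset_pct=0.01):
        h = self.backbone(state)
        # --- continuous stage: coarse, generalizable price region ---
        offset_pct = torch.tanh(self.cont_head(h)) * max_offset_pct
        center = mid_price * (1.0 + offset_pct)         # raw target price
        center_ticks = torch.round(center / tick_size)  # snap to tick grid
        # --- candidate subset: n_candidates ticks around the center ---
        half = self.n_candidates // 2
        deltas = torch.arange(-half, half + 1, device=state.device)
        candidates = (center_ticks + deltas) * tick_size  # shape (B, K)
        # --- discrete stage: fine-grained choice within the subset ---
        logits = self.disc_head(torch.cat([h, offset_pct], dim=-1))
        idx = torch.distributions.Categorical(logits=logits).sample()
        limit_price = candidates.gather(-1, idx.unsqueeze(-1)).squeeze(-1)
        return limit_price, offset_pct, logits


# Usage: batch of 4 states, mid price 100.00, tick size 0.01.
policy = HybridPolicy(state_dim=10)
state = torch.randn(4, 10)
mid = torch.full((4, 1), 100.0)
price, offset, logits = policy(state, mid, tick_size=0.01)
```

Both heads could be trained with any standard policy-gradient method; the design point the abstract emphasizes is that the continuous offset transfers across stocks while the discrete selection adapts to each stock's tick grid.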

Citation (APA)
Pan, F., Zhang, T., Luo, L., He, J., & Liu, S. (2022). Learn Continuously, Act Discretely: Hybrid Action-Space Reinforcement Learning For Optimal Execution. In IJCAI International Joint Conference on Artificial Intelligence (pp. 3912–3918). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2022/543
