Optimal Trade Execution Based on Deep Deterministic Policy Gradient

Abstract

In this paper, we address the Optimal Trade Execution (OTE) problem over the limit order book mechanism: how best to trade a given block of shares at minimal cost or for maximal return. To this end, we propose a deep-reinforcement-learning-based solution. Although reinforcement learning has been applied to the OTE problem before, this paper is the first work to explore deep reinforcement learning for it, and it achieves state-of-the-art performance. Concretely, we develop a Deep Deterministic Policy Gradient (DDPG) framework that can effectively exploit comprehensive features drawn from multiple periods of the real, volatile market. Experiments on three real market datasets show that the proposed approach significantly outperforms existing methods, including the Submit-and-Leave (SL) baseline policy, the Q-learning algorithm, and the latest hybrid method that combines the Almgren-Chriss model with reinforcement learning.
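For readers unfamiliar with DDPG, the sketch below shows the core actor-critic update that any such framework builds on. It is a minimal illustration in PyTorch, not the authors' implementation: the state dimension, network sizes, hyperparameters, and the choice of a sigmoid-bounded "fraction of remaining inventory" action are all hypothetical assumptions for exposition; the paper's actual features and architecture are described in the full text.

```python
# Minimal DDPG sketch (PyTorch) for a trade-execution-style task.
# All dimensions, architectures, and hyperparameters are illustrative.
import torch
import torch.nn as nn

STATE_DIM = 16   # hypothetical market-feature count (e.g., inventory, time left, LOB features)
ACTION_DIM = 1   # hypothetical action: fraction of remaining shares to submit this period

class Actor(nn.Module):
    """Deterministic policy mu(s): maps market state to a trade fraction in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Sigmoid(),  # bounded action
        )
    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    """Q(s, a): estimates expected execution return of a state-action pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

actor, critic = Actor(), Critic()
actor_tgt, critic_tgt = Actor(), Critic()
actor_tgt.load_state_dict(actor.state_dict())
critic_tgt.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
GAMMA, TAU = 0.99, 0.005

def ddpg_update(batch):
    """One DDPG step on a replay batch of (s, a, r, s', done) float tensors."""
    s, a, r, s2, done = batch
    # Critic: regress Q(s, a) onto the bootstrapped one-step target.
    with torch.no_grad():
        q_target = r + GAMMA * (1 - done) * critic_tgt(s2, actor_tgt(s2))
    critic_loss = nn.functional.mse_loss(critic(s, a), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: ascend the critic's value of the deterministic policy.
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    # Polyak-average the target networks toward the online networks.
    for tgt, src in ((actor_tgt, actor), (critic_tgt, critic)):
        for p_t, p in zip(tgt.parameters(), src.parameters()):
            p_t.data.mul_(1 - TAU).add_(TAU * p.data)
```

In a trade-execution setting, the reward fed into such an update would typically reflect execution cost relative to a benchmark price, with exploration noise added to the actor's output during data collection; those details are specific to the paper's design and are not reproduced here.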

Citation (APA)

Ye, Z., Deng, W., Zhou, S., Xu, Y., & Guan, J. (2020). Optimal trade execution based on deep deterministic policy gradient. In Lecture Notes in Computer Science (Vol. 12112 LNCS, pp. 638–654). Springer. https://doi.org/10.1007/978-3-030-59410-7_42
