Quantum Deep Q-Learning with Distributed Prioritized Experience Replay


Abstract

This paper introduces the QDQN-DPER framework to enhance the efficiency of quantum reinforcement learning (QRL) in solving sequential decision tasks. The framework incorporates prioritized experience replay, asynchronous training, and a novel matrix loss into the training algorithm to reduce the high sampling complexity. Numerical simulations demonstrate that QDQN-DPER outperforms the baseline distributed quantum Q-learning with the same model architecture. The proposed framework holds potential for more complex tasks while maintaining training efficiency.
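
A core classical ingredient named in the abstract, prioritized experience replay, samples past transitions in proportion to their temporal-difference (TD) error rather than uniformly. The following is a minimal Python sketch of proportional prioritized replay (after Schaul et al.), not the paper's implementation: the buffer capacity, the alpha/beta values, and the Transition fields are illustrative assumptions, and the quantum Q-network and asynchronous training loop are omitted.

from collections import namedtuple
import numpy as np

Transition = namedtuple("Transition", "state action reward next_state done")

class PrioritizedReplayBuffer:
    def __init__(self, capacity=10_000, alpha=0.6):
        self.capacity = capacity            # max number of stored transitions (assumed)
        self.alpha = alpha                  # how strongly priorities skew sampling
        self.buffer = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def push(self, transition):
        # New transitions get the current max priority so they are
        # sampled at least once before their TD error is known.
        max_prio = self.priorities.max() if self.buffer else 1.0
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        # Sampling probability is proportional to priority^alpha.
        prios = self.priorities[: len(self.buffer)]
        probs = prios ** self.alpha
        probs /= probs.sum()
        indices = np.random.choice(len(self.buffer), batch_size, p=probs)
        # Importance-sampling weights correct the bias of non-uniform sampling.
        weights = (len(self.buffer) * probs[indices]) ** (-beta)
        weights /= weights.max()
        batch = [self.buffer[i] for i in indices]
        return batch, indices, weights

    def update_priorities(self, indices, td_errors, eps=1e-6):
        # After a training step, refresh priorities from the new TD errors.
        for idx, err in zip(indices, td_errors):
            self.priorities[idx] = abs(err) + eps

In use, a training step would call sample() to draw a weighted batch, compute TD errors with the (quantum) Q-network, and pass them back through update_priorities() so that informative transitions are revisited more often.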

Citation (APA)

Chen, S. Y. C. (2023). Quantum Deep Q-Learning with Distributed Prioritized Experience Replay. In Proceedings - 2023 IEEE International Conference on Quantum Computing and Engineering, QCE 2023 (Vol. 2, pp. 31–35). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/QCE57702.2023.10180
