This paper introduces the QDQN-DPER framework to enhance the efficiency of quantum reinforcement learning (QRL) in solving sequential decision tasks. The framework incorporates prioritized experience replay and asynchronous training into the training algorithm to reduce high sampling complexity. Numerical simulations demonstrate that QDQN-DPER outperforms baseline distributed quantum Q-learning with the same model architecture. The proposed framework holds promise for more complex tasks while maintaining training efficiency.
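To make the prioritized experience replay component concrete, the following is a minimal sketch of a proportional prioritized replay buffer in plain Python. The class name, the `alpha` exponent, and the use of TD error as the priority signal are standard choices from the prioritized replay literature, not details taken from this paper; the paper's exact prioritization and annealing scheme may differ.

```python
import random

class PrioritizedReplayBuffer:
    """Sketch of proportional prioritized experience replay.

    Transitions with larger TD error are sampled more often,
    which is the mechanism PER uses to reduce sampling complexity.
    """

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly priorities skew sampling
        self.buffer = []            # stored (s, a, r, s_next, done) tuples
        self.priorities = []        # one priority per stored transition

    def add(self, transition, td_error=1.0):
        # Priority from TD error; small epsilon keeps it nonzero.
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.buffer) >= self.capacity:
            # Drop the oldest transition once capacity is reached.
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size):
        # Sample indices with probability proportional to priority.
        idxs = random.choices(range(len(self.buffer)),
                              weights=self.priorities, k=batch_size)
        return idxs, [self.buffer[i] for i in idxs]

    def update_priorities(self, idxs, td_errors):
        # After a learning step, refresh priorities with new TD errors.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = (abs(err) + 1e-6) ** self.alpha
```

In a full QDQN-DPER-style training loop, multiple asynchronous actors would push transitions into a shared buffer of this kind while a learner samples prioritized batches and writes updated TD errors back.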
CITATION STYLE
Chen, S. Y. C. (2023). Quantum Deep Q-Learning with Distributed Prioritized Experience Replay. In Proceedings - 2023 IEEE International Conference on Quantum Computing and Engineering, QCE 2023 (Vol. 2, pp. 31–35). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/QCE57702.2023.10180