Averaged-A3C for asynchronous deep reinforcement learning

Abstract

In recent years, Deep Reinforcement Learning (DRL) has achieved unprecedented success in high-dimensional and large-scale tasks. However, the instability and high variance of DRL algorithms significantly affect their performance. To alleviate this problem, the Asynchronous Advantage Actor-Critic (A3C) algorithm uses the advantage function to update the policy and value networks, but the advantage function still retains considerable variance. To reduce this variance, we propose a new A3C algorithm called Averaged Asynchronous Advantage Actor-Critic (Averaged-A3C). Averaged-A3C extends A3C by averaging previously learned state value estimates when computing the advantage function, which leads to a more stable training procedure and improved performance. We evaluate the new algorithm on several games in the Atari 2600 and MuJoCo environments. Experimental results show that Averaged-A3C effectively improves the agent's performance and the stability of the training process compared to the original A3C algorithm.
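As a rough illustration of the idea described in the abstract (not the authors' implementation), the sketch below assumes the advantage is formed against the mean of the last K value estimates of each state, with `averaged_advantage`, the `value_history` bookkeeping, and K being hypothetical names and choices for this example.

```python
import numpy as np

def averaged_advantage(returns, value_history, k=5):
    """Sketch of an averaged advantage: A_t = R_t - mean of the last k
    value estimates V(s_t). `returns` has shape (T,); `value_history` is a
    list of (T,) arrays holding value estimates from recent critic snapshots
    (assumed bookkeeping, not specified in the abstract)."""
    recent = np.stack(value_history[-k:], axis=0)  # shape (k, T)
    v_avg = recent.mean(axis=0)                    # averaged state values
    return returns - v_avg                         # reduced-variance advantage

if __name__ == "__main__":
    # Toy usage example with random value snapshots
    T = 4
    returns = np.array([1.0, 0.5, 0.2, 0.9])
    history = [np.random.uniform(0.0, 1.0, size=T) for _ in range(5)]
    print(averaged_advantage(returns, history, k=3))
```

The intent of averaging over several earlier estimates, analogous in spirit to Averaged-DQN, is that noise in any single value snapshot is smoothed out before it enters the policy-gradient update.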

Citation (APA)

Chen, S., Zhang, X. F., Wu, J. J., & Liu, D. (2018). Averaged-A3C for asynchronous deep reinforcement learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11303 LNCS, pp. 277–288). Springer Verlag. https://doi.org/10.1007/978-3-030-04182-3_25
