Learning Agents with Prioritization and Parameter Noise in Continuous State and Action Space

Abstract

Among the many variants of reinforcement learning (RL), an important class of problems is those where the state and action spaces are continuous: autonomous robots, autonomous vehicles, and optimal control are all examples of such problems that lend themselves naturally to RL-based algorithms and have continuous state and action spaces. In this paper, we introduce a prioritized form of a combination of state-of-the-art approaches such as Deep Q-Networks (DQN) and Deep Deterministic Policy Gradient (DDPG) to outperform earlier results on continuous state and action space problems. Our experiments also involve the use of parameter noise during training, resulting in more robust deep RL models that significantly outperform earlier results. We believe these results are a valuable addition to work on continuous state and action space problems.
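The abstract names two ingredients: prioritization (most commonly realized as prioritized experience replay, in the sense of Schaul et al.) and parameter noise added during training. The paper's own code is not included here, so the sketch below is only an illustrative assumption of how these two pieces are typically implemented; the class name, function name, and hyperparameters (alpha, beta, stddev) are placeholders, not taken from the paper.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Proportional prioritized replay: transitions are sampled with
    probability proportional to |TD error|^alpha."""

    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha
        self.eps = eps  # keeps every priority strictly positive
        self.buffer = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, transition):
        # New transitions get the current maximum priority so they are
        # replayed at least once before their TD error is known.
        max_prio = self.priorities.max() if self.buffer else 1.0
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        prios = self.priorities[:len(self.buffer)]
        probs = prios ** self.alpha
        probs /= probs.sum()
        idxs = np.random.choice(len(self.buffer), batch_size, p=probs)
        # Importance-sampling weights correct the bias introduced by
        # non-uniform sampling; normalized for stability.
        weights = (len(self.buffer) * probs[idxs]) ** (-beta)
        weights /= weights.max()
        return [self.buffer[i] for i in idxs], idxs, weights

    def update_priorities(self, idxs, td_errors):
        self.priorities[idxs] = np.abs(td_errors) + self.eps


def perturb_parameters(params, stddev=0.2, rng=None):
    """Parameter-space noise: Gaussian noise is added directly to the
    policy weights (rather than to the actions) for structured,
    state-dependent exploration."""
    rng = rng or np.random.default_rng()
    return [p + rng.normal(0.0, stddev, size=p.shape) for p in params]
```

In a DDPG-style training loop, one would typically collect experience with a perturbed copy of the actor while gradient updates are applied to the unperturbed weights, and scale the critic's TD-error loss by the importance-sampling weights returned from sample().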

Citation (APA)

Mangannavar, R., & Srinivasaraghavan, G. (2019). Learning Agents with Prioritization and Parameter Noise in Continuous State and Action Space. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11554 LNCS, pp. 202–212). Springer Verlag. https://doi.org/10.1007/978-3-030-22796-8_22
