Efficient baseline-free sampling in parameter exploring policy gradients: Super symmetric PGPE


Abstract

Policy gradient methods that explore directly in parameter space are among the most effective and robust direct policy search methods and have recently drawn considerable attention. The basic method in this field, Policy Gradients with Parameter-based Exploration (PGPE), uses two samples placed symmetrically around the current hypothesis to circumvent the misleading reward signals that the usual baseline approach produces on problems with asymmetric reward distributions. The exploration parameters, however, are still updated via a baseline approach, leaving the exploration prone to asymmetric reward distributions. In this paper we show how the exploration parameters can be sampled quasi-symmetrically, even though they are constrained (the exploration standard deviations must remain positive) rather than free. We give a transformation approximation that yields quasi-symmetric samples with respect to the exploration parameters without changing the overall sampling distribution. Finally, we demonstrate that sampling symmetrically for the exploration parameters as well is superior to the original sampling approach in both sample efficiency and robustness. © 2013 Springer-Verlag Berlin Heidelberg.
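For context, the sketch below shows the basic symmetric-sampling PGPE update that the paper builds on, not the super-symmetric variant it proposes. The mean update uses the symmetric sample pair, while the exploration (sigma) update still relies on a reward baseline, which is exactly the weakness the paper addresses. The function name pgpe_step, the reward_fn callback, the learning rates, and the baseline argument are illustrative assumptions, not names from the paper.

    import numpy as np

    def pgpe_step(mu, sigma, reward_fn, alpha_mu=0.1, alpha_sigma=0.05, baseline=0.0):
        """One update of basic symmetric-sampling PGPE (a sketch).

        mu, sigma : current hypothesis (mean) and exploration std devs (arrays).
        reward_fn : maps a parameter vector to a scalar reward (assumed given).
        """
        eps = np.random.normal(0.0, sigma)       # perturbation ~ N(0, sigma^2)
        r_plus = reward_fn(mu + eps)             # symmetric sample pair around mu
        r_minus = reward_fn(mu - eps)

        # Mean update: the symmetric pair makes a baseline unnecessary for mu.
        mu = mu + alpha_mu * 0.5 * (r_plus - r_minus) * eps

        # Exploration update: still baseline-based in standard PGPE, so it
        # remains sensitive to asymmetric reward distributions.
        r_avg = 0.5 * (r_plus + r_minus)
        grad_sigma = (eps**2 - sigma**2) / sigma
        sigma = sigma + alpha_sigma * (r_avg - baseline) * grad_sigma
        sigma = np.maximum(sigma, 1e-8)          # sigma is constrained to stay positive
        return mu, sigma

Note that sigma enters the distribution only through its magnitude, so a mirrored sample for the exploration parameters cannot simply be obtained by negation; the paper's transformation approximation produces quasi-symmetric exploration samples while leaving the overall sampling distribution unchanged.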

Citation (APA)

Sehnke, F. (2013). Efficient baseline-free sampling in parameter exploring policy gradients: Super symmetric PGPE. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8131 LNCS, pp. 130–137). https://doi.org/10.1007/978-3-642-40728-4_17
