Deep feature-action processing with mixture of updates

Abstract

This paper explores the possibility of combining an actor and a critic in one architecture and training them with a mixture of updates. It describes a model for robot navigation whose architecture resembles an actor-critic reinforcement learning architecture. The actor is set up as one layer, seconded by another layer that deduces the value function; the effect is therefore a critic-like outcome combined with the actor in a single network. The model can hence serve as the base for a truly deep reinforcement learning architecture to be explored in the future. More importantly, this work explores the results of mixing a conjugate gradient update with a gradient update for this architecture. The reward signal is back-propagated from the critic to the actor through a conjugate gradient eligibility trace for the second layer combined with a gradient eligibility trace for the first layer. We show that this mixture of updates appears to work well for this model. The feature layer has been deeply trained by applying a simple PCA to the whole set of image histograms acquired during the first running episode. The model is also able to adapt autonomously to a reduced feature dimension. Initial experimental results on a real robot show that the agent achieved a good success rate in reaching a goal location.
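The abstract describes the method only at a high level. The Python sketch below illustrates one plausible reading of the "mixture of updates" idea, not the paper's actual code: the tanh hidden layer, the Fletcher-Reeves-style beta for the conjugate-gradient trace, the accumulating trace for the first layer, the step sizes, and the synthetic histogram and reward data are all assumptions made purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

def pca_basis(histograms, k):
    # Top-k principal directions of an (n_frames, n_bins) histogram matrix.
    centered = histograms - histograms.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]

first_episode_hists = rng.random((200, 64))        # placeholder: 200 frames, 64-bin histograms
P = pca_basis(first_episode_hists, k=8)            # feature layer trained by a simple PCA

def features(hist):
    return P @ hist                                # reduced feature vector of length 8

k, h_dim = 8, 6
W1 = rng.normal(scale=0.1, size=(h_dim, k))        # first (actor-like) layer
w2 = rng.normal(scale=0.1, size=h_dim)             # second layer that deduces the value
gamma, lam = 0.95, 0.8                             # discount and trace decay (assumed values)
alpha1, alpha2 = 0.01, 0.005                       # step sizes (assumed values)

e1 = np.zeros_like(W1)                             # plain gradient eligibility trace (layer 1)
d2 = np.zeros_like(w2)                             # conjugate-gradient-style trace (layer 2)
g2_prev_norm = 1.0

def forward(phi):
    h = np.tanh(W1 @ phi)
    return h, float(w2 @ h)                        # hidden activity and value estimate

phi = features(rng.random(64))
h, v = forward(phi)
for t in range(100):                               # toy loop with random transitions
    r = rng.random() - 0.5                         # placeholder reward signal
    phi_next = features(rng.random(64))
    h_next, v_next = forward(phi_next)
    delta = r + gamma * v_next - v                 # TD error back-propagated to both layers

    g2 = h                                         # gradient of the value w.r.t. w2
    g1 = np.outer(w2 * (1.0 - h ** 2), phi)        # gradient of the value w.r.t. W1

    # Layer 2: conjugate-gradient trace, new direction = gradient + beta * old direction.
    # A Fletcher-Reeves-style beta is assumed here purely for illustration.
    beta = float(g2 @ g2) / max(g2_prev_norm, 1e-8)
    d2 = g2 + beta * d2
    g2_prev_norm = float(g2 @ g2)

    # Layer 1: ordinary accumulating gradient eligibility trace.
    e1 = gamma * lam * e1 + g1

    w2 += alpha2 * delta * d2                      # conjugate gradient update (second layer)
    W1 += alpha1 * delta * e1                      # gradient update (first layer)
    phi, h, v = phi_next, h_next, v_next

The split of update rules mirrors the abstract's description: the same TD error drives both layers, but the second layer follows a conjugate-gradient direction while the first follows an ordinary gradient trace.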

Cite

Altahhan, A. (2015). Deep feature-action processing with mixture of updates. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9492, pp. 1–10). Springer Verlag. https://doi.org/10.1007/978-3-319-26561-2_1
