Task planning in “block world” with deep reinforcement learning

Abstract

Reinforcement learning has advanced significantly in recent years with the discovery of new training techniques and tools. This paper is devoted to the application of convolutional and recurrent neural networks to a planning task formulated as a reinforcement learning problem. The aim of the work is to check whether neural networks are suited to this problem. In the experiments, conducted in a block-world environment, the task was to move blocks to reach a target final arrangement. A significant part of the problem concerns the design of the reward function and how strongly the results depend on the way the reward is calculated. The current results show that, without decomposing the initial problem into simpler sub-problems, the neural networks did not demonstrate a stable learning process. In the paper, a modified reward function with sub-targets and Euclidean reward calculation was used for more precise reward determination. The results show that none of the tested architectures was able to achieve the goal.
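The reward scheme described in the abstract (sub-targets plus a Euclidean distance term) can be sketched as follows. This is a hypothetical illustration of the general idea, not the authors' exact formula: a dense term penalizes the total Euclidean distance of blocks from their target positions, and a sparse bonus is paid each time a block first reaches its sub-target.

```python
import numpy as np

def shaped_reward(block_positions, target_positions, placed_subgoals,
                  subgoal_bonus=1.0, distance_weight=0.1):
    """Illustrative shaped reward for a block-world rearrangement task.

    Hypothetical sketch (not the paper's exact formula): combines a dense
    negative Euclidean-distance term with a one-time bonus per sub-target.

    block_positions, target_positions: (N, 2) coordinates of N blocks.
    placed_subgoals: mutable set of block indices already credited.
    """
    positions = np.asarray(block_positions, dtype=float)
    targets = np.asarray(target_positions, dtype=float)

    # Dense term: negative sum of Euclidean distances to the targets.
    distances = np.linalg.norm(positions - targets, axis=1)
    dense = -distance_weight * distances.sum()

    # Sparse term: bonus for each block that has just reached its sub-target.
    newly_placed = 0
    for i, d in enumerate(distances):
        if d < 1e-9 and i not in placed_subgoals:
            placed_subgoals.add(i)
            newly_placed += 1

    return dense + subgoal_bonus * newly_placed
```

For example, with block 0 already at its target and block 1 one unit away, the agent receives the sub-target bonus for block 0 minus the weighted distance for block 1; on later steps the same sub-target is not rewarded again, which is the standard way to keep a shaped reward from being farmed.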

Citation (APA)

Ayunts, E., & Panov, A. I. (2017). Task planning in “block world” with deep reinforcement learning. In Advances in Intelligent Systems and Computing (Vol. 636, pp. 3–9). Springer Verlag. https://doi.org/10.1007/978-3-319-63940-6_1
