Multitask reinforcement learning in nondeterministic environments: Maze problem case


Abstract

In many multi-agent systems, agents under training explore their environments to discover their target(s), and each agent can also learn its own strategy. In multitask learning, a single agent studies a set of related problems simultaneously through a common model. The exploration phase of reinforcement learning requires a trial-and-error process to learn which actions yield better rewards from the environment, and a uniform pseudorandom number generator is typically employed for this purpose. On the other hand, chaotic sources can also supply random-like series comparable to stochastic ones. In multitask reinforcement learning, it is useful to exploit teammate agents' experience through simple interactions between them. We employ the past experiences of agents to enhance the performance of multitask learning in a nondeterministic environment, where communication between agents is realized by the operators of an evolutionary algorithm. In this paper we also employ a chaotic generator in the exploration phase of reinforcement learning on a nondeterministic maze problem, and we obtain interesting results on that problem.
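The two ideas in the abstract can be illustrated with a short sketch: a chaotic logistic map replacing the uniform PRNG inside epsilon-greedy exploration, and a crossover-style operator letting two teammate agents mix their learned value tables. This is a hypothetical illustration, not the paper's actual implementation; the class and function names, the choice of the logistic map, and the use of uniform crossover are all assumptions.

```python
def logistic_map(x, r=4.0):
    # Logistic map x' = r*x*(1-x); with r=4 it behaves chaotically and
    # yields a random-like sequence in (0, 1) from a non-trivial seed.
    return r * x * (1.0 - x)

class ChaoticEpsilonGreedy:
    """Epsilon-greedy action selection whose random draws come from a
    chaotic logistic map instead of a uniform PRNG (a sketch of the
    idea; the paper's exact chaotic source may differ)."""

    def __init__(self, n_actions, epsilon=0.2, seed=0.3141):
        self.n_actions = n_actions
        self.epsilon = epsilon
        self.x = seed  # chaotic state: keep in (0, 1), avoid fixed points

    def draw(self):
        # Advance the chaotic state and use it as a pseudo-random number.
        self.x = logistic_map(self.x)
        return self.x

    def select(self, q_values):
        if self.draw() < self.epsilon:
            # Explore: map a chaotic draw onto an action index.
            return int(self.draw() * self.n_actions) % self.n_actions
        # Exploit: pick the greedy action.
        return max(range(self.n_actions), key=lambda a: q_values[a])

def crossover_q_tables(q_a, q_b, mix=0.5, rng=None):
    """Uniform crossover of two agents' flattened Q-tables, a hypothetical
    sketch of how evolutionary operators could let teammates share
    experience; the paper does not specify this exact operator."""
    if rng is None:
        import random
        rng = random.random
    return [qa if rng() < mix else qb for qa, qb in zip(q_a, q_b)]
```

For example, an agent in the maze would call `select` with the Q-values of its current state each step, and periodically exchange table entries with a teammate via `crossover_q_tables`.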

Citation (APA)

Manteghi, S., Parvin, H., Heidarzadegan, A., & Nemati, Y. (2015). Multitask reinforcement learning in nondeterministic environments: Maze problem case. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9116, pp. 64–73). Springer Verlag. https://doi.org/10.1007/978-3-319-19264-2_7
