Online Learning of Genetic Network Programming and its Application to Prisoner's Dilemma Game


Abstract

Genetic Network Programming (GNP) is a recently proposed evolutionary model with a network structure. GNP, an extension of GA and GP, represents solutions as a network and evolves them by "offline learning" (selection, mutation, crossover). Because GNP can memorize past action sequences in its network flow, it handles Partially Observable Markov Decision Processes (POMDPs) well. In this paper, in order to improve the ability of GNP, Q-learning (an off-policy TD control algorithm), one of the best-known online methods, is introduced for online learning of GNP. Q-learning is suitable for GNP because (1) reinforcement learning can estimate the rewards an agent will receive in the future, (2) TD control needs little memory and can learn quickly, and (3) off-policy control can search for an optimal solution independently of the behavior policy. Finally, in the simulations, online learning of GNP is applied to a player of the Prisoner's Dilemma game, and its ability for online adaptation is confirmed. © 2003, The Institute of Electrical Engineers of Japan. All rights reserved.
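The abstract only names the learning method; the paper's actual GNP node structure is not reproduced here. As a hedged illustration of the tabular Q-learning update the abstract refers to (the off-policy TD rule Q(s,a) ← Q(s,a) + α[r + γ·max Q(s',·) − Q(s,a)]), the sketch below applies it to an iterated Prisoner's Dilemma player facing a tit-for-tat opponent. The payoff values, hyperparameters, and one-step-memory state encoding are illustrative assumptions, not taken from the paper.

```python
import random

# Standard PD payoff matrix (assumed values): actions 0 = cooperate, 1 = defect.
# Key is (my_action, opponent_action); value is my reward.
PAYOFF = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate

# State = opponent's previous action (one-step memory); Q[state][action].
Q = [[0.0, 0.0], [0.0, 0.0]]

def choose(state):
    """Epsilon-greedy behavior policy over the two actions."""
    if random.random() < EPS:
        return random.randrange(2)
    return 0 if Q[state][0] >= Q[state][1] else 1

random.seed(0)
state = 0        # assume the opponent cooperated on the (fictitious) previous round
my_prev = 0      # tit-for-tat: the opponent repeats our last action
for _ in range(5000):
    a = choose(state)
    opp = my_prev                    # tit-for-tat opponent's move this round
    r = PAYOFF[(a, opp)]
    next_state = opp
    # Off-policy TD update: the target uses max over next actions,
    # independently of the epsilon-greedy behavior policy.
    Q[state][a] += ALPHA * (r + GAMMA * max(Q[next_state]) - Q[state][a])
    state, my_prev = next_state, a

print(Q)
```

Against tit-for-tat, mutual cooperation yields reward 3 every step, so the learned value of cooperating after the opponent cooperated should approach 3/(1 − γ) = 30; this is only a toy stand-in for the GNP network player studied in the paper.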


CITATION STYLE

APA

Mabu, S., Hu, J., Murata, J., & Hirasawa, K. (2003). Online Learning of Genetic Network Programming and its Application to Prisoner’s Dilemma Game. IEEJ Transactions on Electronics, Information and Systems, 123(3), 535–543. https://doi.org/10.1541/ieejeiss.123.535
