We present a method for learning intrinsic reward functions that drive an agent's learning during periods of practice in which extrinsic task rewards are not available. During practice, the environment may differ from the one available for training and evaluation with extrinsic rewards. We refer to this setup of alternating periods of practice and objective evaluation as practice-match, drawing an analogy to the regimes of skill acquisition common for humans in sports and games. The agent must use its time in the practice environment effectively so that its performance improves during matches. In the proposed method, the intrinsic practice reward is learned through a meta-gradient approach that adapts the practice-reward parameters to reduce the extrinsic loss computed from matches. We illustrate the method on a simple grid world and evaluate it in two games in which the practice environment differs from the match environment: Pong, with practice against a wall and no opponent, and PacMan, with practice in a maze without ghosts. The results show that learning during practice periods in addition to matches yields gains over learning during matches alone.
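To make the meta-gradient step concrete, here is a minimal sketch in JAX, our illustration rather than the paper's implementation, on a hypothetical two-action problem. The intrinsic-reward parameters eta are adapted by differentiating the extrinsic match loss through one practice update of the policy parameters theta; the toy setup, the exact-expectation objectives, and the learning rates ALPHA and BETA are all illustrative assumptions.

```python
import jax
import jax.numpy as jnp

# Hypothetical toy instance of the meta-gradient idea: two actions,
# extrinsic reward is observed only through the match objective.
R_EXT = jnp.array([0.0, 1.0])  # extrinsic reward per action (match); assumed
ALPHA = 0.5                     # inner (practice) learning rate; assumed
BETA = 0.1                      # outer (meta) learning rate; assumed

def practice_return(theta, eta):
    # Expected intrinsic return of the softmax policy pi_theta under the
    # learned intrinsic practice reward eta (one value per action).
    pi = jax.nn.softmax(theta)
    return jnp.dot(pi, eta)

def inner_update(theta, eta):
    # One policy-gradient step on the intrinsic (practice) objective.
    g = jax.grad(practice_return)(theta, eta)
    return theta + ALPHA * g

def match_loss(eta, theta):
    # Extrinsic match loss of the policy *after* practicing with eta.
    theta_prime = inner_update(theta, eta)
    pi = jax.nn.softmax(theta_prime)
    return -jnp.dot(pi, R_EXT)  # negative expected extrinsic return

theta = jnp.zeros(2)  # policy parameters
eta = jnp.zeros(2)    # intrinsic practice-reward parameters

for step in range(200):
    # Meta-gradient: differentiate the match loss through the inner
    # practice update to adapt the intrinsic reward parameters.
    meta_g = jax.grad(match_loss)(eta, theta)
    eta = eta - BETA * meta_g
    # The agent then practices with the current intrinsic reward.
    theta = inner_update(theta, eta)

print("intrinsic reward per action:", eta)
print("policy after practice:", jax.nn.softmax(theta))
```

In this sketch, eta learns to reward the extrinsically better action, so practicing on the intrinsic objective alone improves match performance. The paper's method operates on sampled trajectories in the practice and match environments; the sketch replaces them with exact expected returns to keep the meta-gradient structure visible.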
Rajendran, J., Lewis, R., Veeriah, V., Lee, H., & Singh, S. (2020). How should an agent practice? In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 5454–5461). AAAI Press. https://doi.org/10.1609/aaai.v34i04.5995