Extending Environments to Measure Self-reflection in Reinforcement Learning

  • Alexander, S. A.
  • Castaneda, M.
  • Compher, K.
  • Martinez, O.

Abstract

We consider an extended notion of reinforcement learning in which the environment can simulate the agent and base its outputs on the agent's hypothetical behavior. Since good performance usually requires attending to whatever the environment's outputs depend on, we argue that for an agent to achieve good average performance across many such extended environments, it must self-reflect. Weighted-average performance over the space of all suitably well-behaved extended environments can therefore be taken as a measure of how self-reflective an agent is. We give examples of extended environments and introduce a simple transformation that experimentally seems to improve some standard RL agents' performance in a certain type of extended environment.
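To make the abstract's notion concrete, here is a minimal Python sketch of one possible extended environment. It is an illustrative toy under assumed interfaces, not the paper's actual code: the class names (ConstantAgent, DeceiveYourTwin), the act/step signatures, and the deepcopy-based simulation are all hypothetical. The only essential point is that step receives the agent itself, so the reward can depend on the agent's hypothetical behavior.

    import copy

    class ConstantAgent:
        """A trivial, non-self-reflective agent that always plays action 0."""
        def act(self, observation):
            return 0

    class DeceiveYourTwin:
        """Toy extended environment. Unlike a standard RL environment, its
        step function receives the agent itself, simulates a copy of it,
        and rewards the agent only for deviating from what that simulated
        twin would do on the same observation."""
        def __init__(self):
            self.observation = 0

        def step(self, agent, action):
            twin = copy.deepcopy(agent)          # simulate the agent
            hypothetical_action = twin.act(self.observation)
            # Reward depends on the agent's hypothetical behavior, which
            # is what makes this environment "extended" rather than standard.
            reward = 1.0 if action != hypothetical_action else -1.0
            self.observation = (self.observation + 1) % 2
            return self.observation, reward

    # A constant agent can never do well here: its simulated twin always
    # agrees with it, so every step is punished.
    env, agent = DeceiveYourTwin(), ConstantAgent()
    observation, total_reward = 0, 0.0
    for _ in range(10):
        action = agent.act(observation)
        observation, reward = env.step(agent, action)
        total_reward += reward
    print(total_reward)  # -10.0

Doing well in environments like this requires the agent to model its own behavior (here, to predict and then deviate from its twin), which is the sense in which average performance across extended environments can measure self-reflection.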

Citation (APA)

Alexander, S. A., Castaneda, M., Compher, K., & Martinez, O. (2022). Extending Environments to Measure Self-reflection in Reinforcement Learning. Journal of Artificial General Intelligence, 13(1), 1–24. https://doi.org/10.2478/jagi-2022-0001
