Reinforcement learning with a corrupted reward channel

Abstract

No real-world reward function is perfect. Sensory errors and software bugs may result in agents getting higher (or lower) rewards than they should. For example, a reinforcement learning agent may prefer states where a sensory error gives it the maximum reward, but where the true reward is actually small. We formalise this problem as a generalised Markov Decision Problem called Corrupt Reward MDP (CRMDP). Traditional RL methods fare poorly in CRMDPs, even under strong simplifying assumptions and when trying to compensate for the possibly corrupt rewards. Two ways around the problem are investigated. First, by giving the agent richer data, such as in inverse reinforcement learning and semi-supervised reinforcement learning, reward corruption stemming from systematic sensory errors may sometimes be completely managed. Second, by using randomisation to blunt the agent's optimisation, reward corruption can be partially managed under some assumptions.
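To make the failure mode concrete, the toy sketch below (illustrative, not from the paper) sets up a one-step decision problem with a single corrupted state: the reward channel reports the maximum value for a state whose true reward is nearly zero. A greedy agent that maximises the observed reward always ends up in that corrupt state, while an agent that randomises over the top fraction of states ranked by observed reward, in the spirit of the quantilisation-style randomisation the abstract alludes to, only suffers the corruption part of the time. The states, reward values, and quantile parameter are all assumed purely for illustration.

```python
# Toy illustration (not the paper's algorithm): a one-step "CRMDP" where one
# state's reward signal is corrupted to the maximum while its true reward is low.
import random

random.seed(0)

# Hypothetical true rewards for 10 states, in [0, 1].
true_reward = [0.1, 0.3, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.05]

# Observed (possibly corrupt) rewards: state 9 has true reward 0.05,
# but the reward channel reports the maximum value 1.0.
observed_reward = list(true_reward)
observed_reward[9] = 1.0


def greedy_choice():
    """Pick the state with the highest observed reward."""
    return max(range(len(observed_reward)), key=lambda s: observed_reward[s])


def quantilising_choice(q=0.3):
    """Sample uniformly from the top q-fraction of states by observed reward."""
    ranked = sorted(range(len(observed_reward)),
                    key=lambda s: observed_reward[s], reverse=True)
    top = ranked[:max(1, int(q * len(ranked)))]
    return random.choice(top)


trials = 10_000
greedy_true = sum(true_reward[greedy_choice()] for _ in range(trials)) / trials
quant_true = sum(true_reward[quantilising_choice()] for _ in range(trials)) / trials

print(f"greedy agent       : avg true reward = {greedy_true:.3f}")
print(f"quantilising agent : avg true reward = {quant_true:.3f}")
# The greedy agent always lands in the corrupt state (true reward 0.05),
# while the randomising agent only lands there a fraction of the time.
```

Under these made-up numbers, the greedy agent's average true reward is 0.05, whereas the randomising agent averages roughly 0.53: randomisation does not remove the corruption, but it bounds how much the agent's behaviour is distorted by a single corrupted reward.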

Citation (APA)

Everitt, T., Krakovna, V., Orseau, L., & Legg, S. (2017). Reinforcement learning with a corrupted reward channel. In IJCAI International Joint Conference on Artificial Intelligence (pp. 4705–4713). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2017/656
