Reinforcement learning with non-Markovian rewards


Abstract

The standard RL world model is the Markov Decision Process (MDP). A basic premise of MDPs is that rewards depend only on the last state and action. Yet many real-world rewards are non-Markovian. For example, a reward for bringing coffee only if it was requested earlier and not yet served is non-Markovian if the state records only current requests and deliveries. Past work considered the problem of modeling and solving MDPs with non-Markovian rewards (NMR), but we know of no principled approaches for RL with NMR. Here, we address the problem of policy learning from experience with such rewards. We describe and empirically evaluate four combinations of the classical RL algorithms Q-learning and R-max with automata-learning algorithms, yielding new RL algorithms for domains with NMR. We also prove that some of these variants converge to an optimal policy in the limit.
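The core idea can be sketched in code. This is not the authors' implementation: it is a minimal illustration, with a hand-specified reward automaton standing in for one produced by an automata-learning algorithm, of why the approach works — once the automaton is known, the non-Markovian coffee reward becomes Markovian over the product state (MDP state, automaton state), so ordinary Q-learning applies. The toy environment, state encoding, and hyperparameters below are all assumptions for illustration.

```python
import random
from collections import defaultdict

# Reward automaton for the coffee example (assumed, not learned here):
# automaton state 0 = no pending request, 1 = coffee requested but not served.
def automaton_step(q, event):
    if q == 0 and event == "request":
        return 1, 0.0
    if q == 1 and event == "deliver":
        return 0, 1.0  # reward only if requested earlier and not yet served
    return q, 0.0

ACTIONS = ["wait", "deliver"]

def env_step(s, a):
    """Tiny single-state MDP: the observable event is the action itself,
    or a random incoming coffee request while waiting."""
    event = a if a == "deliver" else ("request" if random.random() < 0.5 else "none")
    return s, event

def q_learning(episodes=2000, horizon=20, alpha=0.1, gamma=0.9, eps=0.1):
    # Q-values are indexed by the PRODUCT state (mdp_state, automaton_state),
    # which restores the Markov property for the non-Markovian reward.
    Q = defaultdict(float)
    for _ in range(episodes):
        s, q = 0, 0
        for _ in range(horizon):
            prod = (s, q)
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda a_: Q[(prod, a_)]))
            s2, event = env_step(s, a)
            q2, r = automaton_step(q, event)     # reward comes from the automaton
            prod2 = (s2, q2)
            best = max(Q[(prod2, a_)] for a_ in ACTIONS)
            Q[(prod, a)] += alpha * (r + gamma * best - Q[(prod, a)])
            s, q = s2, q2
    return Q
```

With the automaton state in the loop, the learner can distinguish "coffee requested" from "no request" histories and learns to deliver only when a request is pending, something plain Q-learning on the MDP state alone cannot represent.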

Citation (APA)

Gaon, M., & Brafman, R. I. (2020). Reinforcement learning with non-Markovian rewards. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 3980–3987). AAAI Press. https://doi.org/10.1609/aaai.v34i04.5814
