Defining admissible rewards for high-confidence policy evaluation in batch reinforcement learning


Abstract

A key impediment to reinforcement learning (RL) in real applications with limited, batch data lies in defining a reward function that reflects what we implicitly know about reasonable behaviour for a task and allows for robust off-policy evaluation. In this work, we develop a method to identify an admissible set of reward functions for policies that (a) do not deviate too far in performance from prior behaviour, and (b) can be evaluated with high confidence, given only a collection of past trajectories. Together, these ensure that we avoid proposing unreasonable policies in high-risk settings. We demonstrate our approach to reward design on synthetic domains as well as in a critical care context, to guide the design of a reward function that consolidates clinical objectives to learn a policy for weaning patients from mechanical ventilation.
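The two admissibility criteria in the abstract can be illustrated with a small sketch. This is not the authors' algorithm; it is a hypothetical screening loop that keeps a candidate reward function only if (a) the importance-sampled estimate of the evaluation policy's value stays close to the behaviour policy's value, and (b) a Hoeffding-style confidence interval on that off-policy estimate is narrow enough. The names, thresholds, and the scalar stand-in for trajectories are all illustrative assumptions.

```python
import numpy as np

def admissible(reward_fns, trajectories, weights, eps=0.1, ci_max=0.2, delta=0.05):
    """Keep reward functions whose off-policy value estimate is (a) close to the
    behaviour policy's value and (b) evaluable with high confidence.

    reward_fns   -- candidate reward functions mapping a trajectory to a return
    trajectories -- batch of logged trajectories (any objects the rewards accept)
    weights      -- per-trajectory importance weights for the evaluation policy
    """
    n = len(trajectories)
    r_max = 1.0  # assumed bound on the per-trajectory return
    # Hoeffding half-width for the weighted mean (hypothetical confidence bound)
    half_width = r_max * np.max(weights) * np.sqrt(np.log(2 / delta) / (2 * n))
    keep = []
    for r in reward_fns:
        returns = np.array([r(tau) for tau in trajectories])
        v_behaviour = returns.mean()           # value of the logged behaviour
        v_target = np.mean(weights * returns)  # importance-sampled target value
        if abs(v_target - v_behaviour) <= eps and half_width <= ci_max:
            keep.append(r)
    return keep
```

With uniform weights and enough trajectories, both conditions are easy to satisfy; as the importance weights grow or the batch shrinks, the confidence width inflates and candidates are rejected, which is the qualitative behaviour the abstract's criterion (b) describes.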

Citation (APA)

Prasad, N., Engelhardt, B., & Doshi-Velez, F. (2020). Defining admissible rewards for high-confidence policy evaluation in batch reinforcement learning. In ACM CHIL 2020 - Proceedings of the 2020 ACM Conference on Health, Inference, and Learning (pp. 1–9). Association for Computing Machinery, Inc. https://doi.org/10.1145/3368555.3384450
