Epistemic risk-sensitive reinforcement learning

arXiv: 1906.06273
13 citations · 38 Mendeley readers

Abstract

We develop a framework for risk-sensitive behaviour in reinforcement learning (RL) under uncertainty about the environment dynamics, leveraging utility-based definitions of risk sensitivity. In this framework, the preference for risk can be tuned by varying the utility function, for which we develop dynamic programming (DP) and policy gradient-based algorithms. The resulting risk-averse behaviour is compared with that of a risk-neutral policy in environments with epistemic risk.
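The paper's DP and policy gradient algorithms are not reproduced on this page, but the utility-based idea can be illustrated with a minimal sketch. Assuming an exponential utility U(x) = -exp(-βx), a standard choice in risk-sensitive RL (not necessarily the one used in the paper), the certainty equivalent -1/β · log E[exp(-βR)] of a policy's return, taken over posterior samples of the environment dynamics, penalises epistemic risk for β > 0 and recovers the risk-neutral mean as β → 0. The sampled returns below are hypothetical numbers for illustration only.

```python
import numpy as np

def certainty_equivalent(returns, beta):
    """Exponential-utility certainty equivalent of sampled returns.

    beta > 0 gives risk aversion, beta < 0 risk seeking, and
    beta == 0 recovers the risk-neutral mean.
    """
    returns = np.asarray(returns, dtype=float)
    if beta == 0.0:
        return returns.mean()
    # Compute -1/beta * log(mean(exp(-beta * R))) via log-mean-exp
    # for numerical stability.
    z = -beta * returns
    m = z.max()
    return -(m + np.log(np.mean(np.exp(z - m)))) / beta

# Epistemic uncertainty: returns of one fixed policy evaluated under
# several sampled models of the dynamics (hypothetical values).
sampled_returns = [10.0, 10.0, 10.0, -20.0]

risk_neutral = certainty_equivalent(sampled_returns, beta=0.0)  # plain mean
risk_averse = certainty_equivalent(sampled_returns, beta=1.0)   # penalises the bad model
```

Tuning β plays the role the paper assigns to the choice of utility function: larger β weights the worst sampled models more heavily, so a risk-averse agent prefers policies whose returns are robust across the posterior over dynamics.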

Citation (APA)

Eriksson, H., & Dimitrakakis, C. (2020). Epistemic risk-sensitive reinforcement learning. In ESANN 2020 - Proceedings, 28th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (pp. 339–344). ESANN (i6doc.com).
