Abstract
We develop a framework for risk-sensitive behaviour in reinforcement learning (RL) under uncertainty about the environment dynamics, leveraging utility-based definitions of risk sensitivity. In this framework, the preference for risk can be tuned by varying the utility function, for which we develop dynamic programming (DP) and policy gradient-based algorithms. The resulting risk-averse behaviour is compared with that of a risk-neutral policy in environments with epistemic risk.
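As a minimal illustration of the utility-based idea the abstract describes (not code from the paper itself), the sketch below computes a risk-sensitive certainty equivalent of returns sampled from posterior dynamics models using an exponential utility; the function name, the risk parameter `beta`, and the sample data are all assumptions made for this example.

```python
import math

def certainty_equivalent(returns, beta):
    """Certainty equivalent under the exponential utility
    U(x) = -exp(-beta * x). beta > 0 gives risk-averse behaviour;
    beta -> 0 recovers the risk-neutral mean."""
    n = len(returns)
    # CE = -(1/beta) * log( (1/n) * sum_i exp(-beta * r_i) )
    log_mean = math.log(sum(math.exp(-beta * r) for r in returns) / n)
    return -log_mean / beta

# Hypothetical returns from running one policy in several MDPs
# drawn from a posterior over the dynamics (epistemic uncertainty).
samples = [10.0, 10.0, 0.0, 10.0]
print(certainty_equivalent(samples, beta=1.0))   # well below the mean of 7.5
print(certainty_equivalent(samples, beta=1e-6))  # close to the mean of 7.5
```

Varying `beta` is the knob the framework exposes: larger values penalise the spread of returns across plausible models more heavily, while `beta` near zero reduces to the ordinary expected return.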
Eriksson, H., & Dimitrakakis, C. (2020). Epistemic risk-sensitive reinforcement learning. In ESANN 2020 - Proceedings, 28th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (pp. 339–344). ESANN (i6doc.com).