Risk-averse Distributional Reinforcement Learning: A CVaR Optimization Approach


Abstract

Conditional Value-at-Risk (CVaR) is a well-known measure of risk that has been directly linked to robustness, an important component of Artificial Intelligence (AI) safety. In this paper we focus on optimizing CVaR in the context of Reinforcement Learning (RL), as opposed to the usual risk-neutral expectation. As a first original contribution, we improve the CVaR Value Iteration algorithm (Chow et al., 2015) in a way that reduces the computational complexity of the original algorithm from polynomial to linear time. Secondly, we propose a sampling version of CVaR Value Iteration, which we call CVaR Q-learning. We also derive a distributional policy improvement algorithm, and later use it as a heuristic for extracting the optimal policy from the converged CVaR Q-learning algorithm. Finally, to show the scalability of our method, we propose an approximate Q-learning algorithm by reformulating the CVaR Temporal Difference update rule as a loss function, which we later use in a deep learning context. All proposed methods are experimentally analyzed, including the Deep CVaR Q-learning agent, which learns how to avoid risk from raw pixels.
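To fix ideas about the risk measure being optimized: CVaR at level α is the expected outcome over the worst α-fraction of cases. The following is a minimal illustrative sketch of the empirical estimator, not code from the paper; the function name and sample data are hypothetical.

```python
import numpy as np

def empirical_cvar(returns, alpha):
    """Empirical CVaR at level alpha for a sample of returns.

    Sorts the sample and averages the worst ceil(alpha * n)
    outcomes, i.e. the mean of the lower alpha-tail.
    """
    sorted_returns = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.ceil(alpha * len(sorted_returns))))
    return float(sorted_returns[:k].mean())

# Example: the worst 50% of {1, 2, 3, 4} is {1, 2}, so CVaR_0.5 = 1.5.
print(empirical_cvar([1.0, 2.0, 3.0, 4.0], alpha=0.5))
```

As α → 1 the estimator recovers the ordinary (risk-neutral) mean, while small α concentrates on the worst-case tail, which is the risk-averse objective studied in the paper.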

Citation (APA)

Stanko, S., & Macek, K. (2019). Risk-averse Distributional Reinforcement Learning: A CVaR Optimization Approach. In International Joint Conference on Computational Intelligence (Vol. 1, pp. 412–423). Science and Technology Publications, Lda. https://doi.org/10.5220/0008175604120423
