Causal Deep Reinforcement Learning Using Observational Data

Abstract

Deep reinforcement learning (DRL) requires the collection of interventional data, which is sometimes expensive and even unethical in the real world, such as in autonomous driving and medicine. Offline reinforcement learning promises to alleviate this issue by exploiting the vast amount of observational data available in the real world. However, observational data may mislead the learning agent to undesirable outcomes if the behavior policy that generated the data depends on unobserved random variables (i.e., confounders). In this paper, we propose two deconfounding methods in DRL to address this problem. The methods first calculate the importance degree of each sample using causal inference techniques, and then adjust the impact of different samples on the loss function by reweighting or resampling the offline dataset so that it behaves like unconfounded data. These deconfounding methods can be flexibly combined with existing model-free DRL algorithms such as soft actor-critic and deep Q-learning, provided their loss functions satisfy a weak condition. We prove the effectiveness of our deconfounding methods and validate them experimentally.
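The abstract only gestures at the mechanics, so the following is a minimal PyTorch sketch of the two variants it describes, applied to deep Q-learning: rescaling each transition's TD loss by a per-sample weight (reweighting), or drawing minibatches with probability proportional to that weight (resampling). The weight vector `w` here is a random placeholder standing in for the importance degrees the paper derives via causal inference, and all names (`weighted_td_loss`, `q_net`, etc.) are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

def weighted_td_loss(q_net, target_net, batch, weights, gamma=0.99):
    """TD loss for deep Q-learning where each transition's contribution is
    rescaled by its (assumed) deconfounding weight before averaging."""
    s, a, r, s_next, done = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * target_net(s_next).max(dim=1).values
    per_sample = nn.functional.mse_loss(q_sa, target, reduction="none")
    return (weights * per_sample).mean()

# Dummy offline dataset: 1000 transitions, 4-dim states, 2 actions.
n, s_dim, n_act = 1000, 4, 2
s = torch.randn(n, s_dim)
a = torch.randint(0, n_act, (n,))
r = torch.randn(n)
s_next = torch.randn(n, s_dim)
done = torch.zeros(n)
w = torch.rand(n) + 0.5  # placeholder for causally derived importance weights

q_net = nn.Sequential(nn.Linear(s_dim, 64), nn.ReLU(), nn.Linear(64, n_act))
target_net = nn.Sequential(nn.Linear(s_dim, 64), nn.ReLU(), nn.Linear(64, n_act))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# Variant A (reweighting): ordinary minibatches, weighted loss.
loader = DataLoader(TensorDataset(s, a, r, s_next, done, w),
                    batch_size=256, shuffle=True)
for s_b, a_b, r_b, sn_b, d_b, w_b in loader:
    loss = weighted_td_loss(q_net, target_net, (s_b, a_b, r_b, sn_b, d_b), w_b)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Variant B (resampling): sample transitions in proportion to w, then the
# plain unweighted loss can be used on each minibatch.
sampler = WeightedRandomSampler(weights=w, num_samples=n, replacement=True)
loader_rs = DataLoader(TensorDataset(s, a, r, s_next, done),
                       batch_size=256, sampler=sampler)
```

Both variants plug into any model-free loss that is an average of per-sample terms, which is presumably the "weak condition" the abstract refers to; how the weights themselves are estimated is the substance of the paper and is not reproduced here.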

Citation (APA)

Zhu, W., Yu, C., & Zhang, Q. (2023). Causal Deep Reinforcement Learning Using Observational Data. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2023-August, pp. 4711–4719). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2023/524
