Deep Recurrent Belief Propagation Network for POMDPs

Abstract

In many real-world sequential decision-making tasks, especially in continuous control such as robotic control, observations are rarely perfect: the sensory data may be incomplete, noisy, or even dynamically polluted due to unexpected malfunctions or the intrinsically low quality of the sensors. Previous methods handle these issues in the framework of POMDPs and are either deterministic, relying on feature memorization, or stochastic, relying on belief inference. In this paper, we present a new method that lies between these two lines of work and combines the strengths of both. In particular, the proposed method, named Deep Recurrent Belief Propagation Network (DRBPN), adopts a hybrid belief-updating procedure: an RNN-type feature extraction step followed by an analytical belief inference step, which significantly reduces the computational cost while faithfully capturing the complex dynamics and maintaining the uncertainty necessary for generalization. The effectiveness of the proposed method is verified on a collection of benchmark tasks, showing that our approach outperforms several state-of-the-art methods under various challenging scenarios.
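
To make the hybrid belief-update idea concrete, the sketch below shows one way such a two-step procedure could be organized: a recurrent cell summarizes the action-observation history, and a closed-form Gaussian correction then propagates a belief mean and variance. This is a minimal illustration under a diagonal-Gaussian assumption; the class name, layer choices, and Kalman-style correction are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a hybrid belief update: RNN feature extraction
# followed by an analytical (diagonal Gaussian) belief inference step.
# Names and dimensions are illustrative, not taken from the paper.
import torch
import torch.nn as nn


class HybridBeliefUpdater(nn.Module):
    def __init__(self, obs_dim, act_dim, feat_dim, belief_dim):
        super().__init__()
        # Step 1: RNN-type feature extraction over the action-observation history.
        self.rnn = nn.GRUCell(obs_dim + act_dim, feat_dim)
        # Step 2: map the recurrent feature and the observation to the
        # parameters of a closed-form Gaussian belief update.
        self.pred_mean = nn.Linear(feat_dim, belief_dim)
        self.pred_logvar = nn.Linear(feat_dim, belief_dim)
        self.obs_mean = nn.Linear(obs_dim, belief_dim)
        self.obs_logvar = nn.Linear(obs_dim, belief_dim)

    def forward(self, obs, act, hidden, mean, var):
        # Deterministic feature memorization (RNN step).
        hidden = self.rnn(torch.cat([obs, act], dim=-1), hidden)

        # Analytical belief inference: predict from the recurrent feature,
        # then correct with the current observation via a precision-weighted
        # (Kalman-style) combination of the two Gaussians.
        prior_mean = mean + self.pred_mean(hidden)
        prior_var = var + self.pred_logvar(hidden).exp()
        like_mean = self.obs_mean(obs)
        like_var = self.obs_logvar(obs).exp()

        gain = prior_var / (prior_var + like_var)   # per-dimension gain
        new_mean = prior_mean + gain * (like_mean - prior_mean)
        new_var = (1.0 - gain) * prior_var
        return hidden, new_mean, new_var


if __name__ == "__main__":
    updater = HybridBeliefUpdater(obs_dim=8, act_dim=2, feat_dim=32, belief_dim=16)
    obs, act = torch.randn(4, 8), torch.randn(4, 2)
    hidden = torch.zeros(4, 32)
    mean, var = torch.zeros(4, 16), torch.ones(4, 16)
    hidden, mean, var = updater(obs, act, hidden, mean, var)
    print(mean.shape, var.shape)  # torch.Size([4, 16]) torch.Size([4, 16])
```

Because the correction step is in closed form, no sampling or particle set is needed to maintain uncertainty over the latent state, which is where the claimed computational savings over fully stochastic belief-inference approaches would come from.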

Cite

APA

Wang, Y., & Tan, X. (2021). Deep Recurrent Belief Propagation Network for POMDPs. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 11B, pp. 10236–10244). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i11.17227
