An actor-critic-attention mechanism for deep reinforcement learning in multi-view environments


Abstract

In reinforcement learning, leveraging multiple views of the environment can improve the learning of complicated policies. In multi-view environments, because individual views frequently suffer from partial observability, their levels of importance often differ. In this paper, we propose a deep reinforcement learning method with an attention mechanism for multi-view environments. Each view provides its own representative information about the environment. Through the attention mechanism, our method generates a single feature representation of the environment from its multiple views, learning a policy that dynamically attends to each view according to its importance in the decision-making process. Through experiments, we show that our method outperforms state-of-the-art baselines on the TORCS racing-car simulator and three other complex 3D environments with obstacles. We also provide experimental results evaluating the performance of our method under noisy conditions and partial-observability settings.
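The abstract does not specify the fusion step in detail; as a minimal sketch, one common form of such view attention is a learned scoring function followed by a softmax-weighted sum of per-view features, so that views judged more important for the current decision contribute more to the fused representation. All names, shapes, and the linear scoring function below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_views(view_feats, w, b=0.0):
    """Attention-weighted fusion of per-view features.

    view_feats: (n_views, d) array, one feature vector per view.
    w: (d,) scoring weights (learned in practice; random here), b: scalar bias.
    Returns the fused (d,) representation and the per-view attention weights.
    """
    scores = view_feats @ w + b      # one relevance score per view
    alpha = softmax(scores)          # importance weights, sum to 1
    fused = alpha @ view_feats       # convex combination of view features
    return fused, alpha

# Toy usage: 3 camera views, each encoded as an 8-dim feature vector.
rng = np.random.default_rng(0)
views = rng.normal(size=(3, 8))
w = rng.normal(size=8)
fused, alpha = fuse_views(views, w)
```

Because the weights form a convex combination, a view that becomes uninformative (e.g. occluded or noisy) can be down-weighted toward zero without changing the dimensionality of the representation fed to the actor and critic.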

Citation (APA)

Barati, E., & Chen, X. (2019). An actor-critic-attention mechanism for deep reinforcement learning in multi-view environments. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2019-August, pp. 2002–2008). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2019/277