Self-Supervised Attention-Aware Reinforcement Learning

22 Citations · 35 Mendeley Readers

Abstract

Visual saliency has emerged as a major visualization tool for interpreting deep reinforcement learning (RL) agents. However, much of the existing research uses it as an analysis tool rather than as an inductive bias for policy learning. In this work, we use visual attention as an inductive bias for RL agents. We propose a novel self-supervised attention learning approach which can (1) learn to select regions of interest without explicit annotations, and (2) act as a plug-in module for existing deep RL methods to improve learning performance. We empirically show that the self-supervised attention-aware deep RL methods outperform the baselines in terms of both convergence rate and final performance. Furthermore, the proposed self-supervised attention is not tied to specific policies, nor restricted to a specific scene. We posit that the proposed approach is a general self-supervised attention module for multi-task learning and transfer learning, and empirically validate the generalization ability of the proposed method. Finally, we show that our method learns meaningful object keypoints, highlighting improvements both qualitatively and quantitatively.
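As an illustration of the plug-in idea described above, the sketch below (PyTorch) inserts a soft spatial attention bottleneck between a standard convolutional encoder and an actor-critic head, so that the policy only sees attended features. This is a minimal, hypothetical sketch: the layer sizes, the softmax-mask formulation, and all module names are assumptions for illustration, not the paper's architecture, and the self-supervised objective used to train the attention module is not shown.

```python
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Soft spatial attention over convolutional features (illustrative sketch)."""

    def __init__(self, in_channels: int):
        super().__init__()
        # 1x1 convolution scores each spatial location; a softmax over all
        # locations turns the scores into a mask that sums to one.
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, features: torch.Tensor):
        b, c, h, w = features.shape
        logits = self.score(features).view(b, -1)            # (B, H*W)
        mask = torch.softmax(logits, dim=1).view(b, 1, h, w)  # (B, 1, H, W)
        # Modulate the features with the mask; downstream heads only see
        # attended features, so attention acts as an inductive bias.
        return features * mask, mask


class AttentionAwarePolicy(nn.Module):
    """Actor-critic network with an attention bottleneck between the
    convolutional encoder and the policy/value heads (hypothetical)."""

    def __init__(self, n_actions: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
        )
        self.attention = SpatialAttention(64)
        self.fc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 7 * 7, 512), nn.ReLU())
        self.policy_head = nn.Linear(512, n_actions)
        self.value_head = nn.Linear(512, 1)

    def forward(self, obs: torch.Tensor):
        feats = self.encoder(obs)                 # (B, 64, 7, 7) for 84x84 inputs
        attended, mask = self.attention(feats)
        hidden = self.fc(attended)
        return self.policy_head(hidden), self.value_head(hidden), mask


if __name__ == "__main__":
    net = AttentionAwarePolicy(n_actions=6)
    frames = torch.randn(2, 4, 84, 84)            # batch of stacked Atari-style frames
    logits, value, mask = net(frames)
    print(logits.shape, value.shape, mask.shape)  # (2, 6), (2, 1), (2, 1, 7, 7)
```

Because the attention module is a self-contained block that only reshapes and reweights features, a sketch like this could in principle be dropped into any existing actor-critic or Q-learning pipeline without changing the policy-learning algorithm itself, which is the sense in which the abstract describes it as a plug-in.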

Citation (APA)

Wu, H., Khetarpal, K., & Precup, D. (2021). Self-Supervised Attention-Aware Reinforcement Learning. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 12A, pp. 10311–10319). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i12.17235
