Standard computer vision systems assume access to intelligently captured inputs (e.g., photos from a human photographer), yet autonomously capturing good observations is a major challenge in itself. We address the problem of learning to look around: How can an agent learn to acquire informative visual observations? We propose a reinforcement learning solution, where the agent is rewarded for reducing its uncertainty about the unobserved portions of its environment. Specifically, the agent is trained to select a short sequence of glimpses, after which it must infer the appearance of its full environment. To address the challenge of sparse rewards, we further introduce sidekick policy learning, which exploits the asymmetry in observability between training and test time. The proposed methods learned observation policies that not only performed the completion task for which they were trained but also generalized to exhibit useful “look-around” behavior for a range of active perception tasks.
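The abstract compresses the training signal into one sentence: the reward is the reduction in the agent's uncertainty about the unobserved portions of its environment, measured through a completion (reconstruction) objective after a short sequence of glimpses. Below is a minimal Python sketch of that reward loop, not the authors' implementation: the panorama dimensions, the mean-fill "decoder", and the random glimpse selection are illustrative stand-ins for the learned components described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the environment: a full panorama the agent never sees
# at once. The 8x16 grid and 4x4 glimpse size are hypothetical choices.
H, W, G = 8, 16, 4
panorama = rng.random((H, W))

def reconstruct(mask, pano):
    """Stand-in 'decoder': keep observed pixels, predict the observed mean
    everywhere else. The actual method trains a neural decoder for this."""
    fill = pano[mask].mean() if mask.any() else 0.5
    pred = np.full_like(pano, fill)
    pred[mask] = pano[mask]
    return pred

mask = np.zeros((H, W), dtype=bool)
prev_err = np.mean((reconstruct(mask, panorama) - panorama) ** 2)

for t in range(4):  # a short sequence of glimpses, as in the abstract
    # A learned policy would choose these coordinates; random here for brevity.
    r = rng.integers(0, H - G + 1)
    c = rng.integers(0, W - G + 1)
    mask[r:r + G, c:c + G] = True
    err = np.mean((reconstruct(mask, panorama) - panorama) ** 2)
    reward = prev_err - err  # reward = drop in uncertainty about unseen parts
    prev_err = err
    print(f"glimpse {t}: reconstruction MSE={err:.4f}, reward={reward:+.4f}")
```

In the actual system, the glimpse coordinates come from the learned policy and the completion from a trained decoder; sidekick policy learning additionally shapes this sparse reward by exploiting the full environment, which is observable only at training time.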
CITATION
Ramakrishnan, S. K., Jayaraman, D., & Grauman, K. (2019). Emergence of exploratory look-around behaviors through active observation completion. Science Robotics, 4(30). https://doi.org/10.1126/scirobotics.aaw6326