Abstract
We propose to directly map raw visual observations and text input to actions for instruction execution. While existing approaches assume access to structured environment representations or use a pipeline of separately trained models, we learn a single model to jointly reason about linguistic and visual input. We use reinforcement learning in a contextual bandit setting to train a neural network agent. To guide the agent’s exploration, we use reward shaping with different forms of supervision. Our approach does not require intermediate representations, planning procedures, or training different models. We evaluate in a simulated environment, and show significant improvements over supervised learning and common reinforcement learning variants.
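To make the training setup concrete, below is a minimal, hypothetical sketch of a contextual-bandit policy-gradient update with a potential-based shaped reward, written in PyTorch. The network architecture, feature sizes, and the `shaped_reward` / `shaped_reward_fn` helpers are illustrative assumptions for exposition, not the authors' implementation.

```python
# Minimal sketch (assumed, not the authors' code): a contextual-bandit
# policy-gradient step over joint visual + linguistic features, with an
# immediate shaped reward instead of long-horizon credit assignment.
import torch
import torch.nn as nn

class BanditPolicy(nn.Module):
    def __init__(self, img_dim, text_dim, hidden_dim, num_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim + text_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_actions),
        )

    def forward(self, img_feat, text_feat):
        # Jointly reason about visual and linguistic input:
        # concatenate the two feature vectors and score each action.
        return self.net(torch.cat([img_feat, text_feat], dim=-1))

def shaped_reward(task_reward, phi_prev, phi_next, gamma=1.0):
    # Potential-based reward shaping: add a difference of potentials
    # (e.g. negative distance to the goal) to densify a sparse task reward.
    return task_reward + gamma * phi_next - phi_prev

policy = BanditPolicy(img_dim=512, text_dim=128, hidden_dim=256, num_actions=5)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def bandit_update(img_feat, text_feat, shaped_reward_fn):
    """One contextual-bandit step: sample an action from the current policy,
    observe an immediate (shaped) reward, and take a REINFORCE-style
    gradient step on the log-probability of the sampled action."""
    logits = policy(img_feat, text_feat)
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()
    reward = shaped_reward_fn(action.item())    # immediate reward for this single decision
    loss = -dist.log_prob(action) * reward      # policy-gradient surrogate loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return action.item(), reward
```

In the contextual-bandit view, each update depends only on the reward observed for the sampled action in the current context, which is why the shaped immediate reward is what guides exploration here.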
Citation
Misra, D., Langford, J., & Artzi, Y. (2017). Mapping instructions and visual observations to actions with reinforcement learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 1004–1015). Association for Computational Linguistics. https://doi.org/10.18653/v1/d17-1106