Scheduled policy optimization for natural language communication with intelligent agents


Abstract

We investigate the task of learning to interpret natural language instructions by jointly reasoning over visual observations and language inputs. Unlike current methods, which start with learning from demonstrations (LfD) and then use reinforcement learning (RL) to fine-tune the model parameters, we propose a novel policy optimization algorithm that dynamically schedules demonstration learning and RL. The proposed training paradigm provides more efficient exploration and better generalization than existing methods. Compared to existing ensemble models, the best single model based on our proposed method reduces the execution error by over 50% in a block-world environment. To further illustrate the exploration strategy of our RL algorithm, we include systematic studies of the evolution of policy entropy during training.
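The abstract does not spell out the scheduling rule, so the following is only a minimal, hypothetical sketch of the general idea: switch between a supervised LfD update and a policy-gradient RL update depending on whether the agent's own rollout earned reward, and track mean policy entropy along the way. The toy environment, the state/action sizes, the reward-triggered switching rule, the expert-action mapping, and the learning rate are all invented here for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy tabular softmax policy over N_STATES states and N_ACTIONS actions
# (sizes are arbitrary placeholders).
N_STATES, N_ACTIONS = 8, 4
theta = np.zeros((N_STATES, N_ACTIONS))

def policy(state):
    """Softmax action distribution for one state."""
    logits = theta[state]
    p = np.exp(logits - logits.max())
    return p / p.sum()

def policy_entropy():
    """Mean entropy of the policy across states, in nats."""
    ps = np.array([policy(s) for s in range(N_STATES)])
    return float(-(ps * np.log(ps + 1e-12)).sum(axis=1).mean())

def rl_update(state, action, reward, lr=0.1):
    """One REINFORCE step: theta += lr * reward * grad log pi(a|s)."""
    grad = -policy(state)
    grad[action] += 1.0
    theta[state] += lr * reward * grad

def demo_update(state, demo_action, lr=0.1):
    """One supervised (LfD) step: cross-entropy gradient toward the
    demonstrated action."""
    grad = -policy(state)
    grad[demo_action] += 1.0
    theta[state] += lr * grad

def run_episode():
    """Stand-in environment: reward 1 iff the sampled action matches a
    fixed 'expert' action for a randomly drawn state."""
    state = rng.integers(N_STATES)
    expert_action = state % N_ACTIONS  # hypothetical demonstration oracle
    action = rng.choice(N_ACTIONS, p=policy(state))
    reward = 1.0 if action == expert_action else 0.0
    return state, action, reward, expert_action

for episode in range(2000):
    state, action, reward, expert_action = run_episode()
    if reward > 0.0:
        # Agent succeeded on its own rollout: reinforce its policy.
        rl_update(state, action, reward)
    else:
        # Agent failed: fall back to learning from the demonstration.
        demo_update(state, expert_action)
    if episode % 500 == 0:
        print(f"episode {episode:4d}  mean policy entropy {policy_entropy():.3f}")
```

With this kind of reward-triggered schedule, the entropy printout gives a rough picture of exploration narrowing as the policy improves, which is the sort of quantity the paper's entropy study examines.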

Citation (APA)

Xiong, W., Guo, X., Yu, M., Chang, S., Zhou, B., & Wang, W. Y. (2018). Scheduled policy optimization for natural language communication with intelligent agents. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2018-July, pp. 4503–4509). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2018/626
