Deep reinforcement learning using compositional representations for performing instructions

2 Citations · 11 Readers (Mendeley)

Abstract

Spoken language is one of the most efficient ways to instruct robots to perform domestic tasks. However, the state of the environment has to be considered to plan and execute actions successfully. We propose a system that learns to recognise the user's intention and map it to a goal. A reinforcement learning (RL) system then generates a sequence of actions toward this goal, taking the state of the environment into account. A novel contribution of this paper is the use of symbolic representations for both the input and output of a deep Q-network (DQN), which enables it to be used in a hybrid system. To show the effectiveness of our approach, the Tell-Me-Dave corpus is used to train an intention detection model, and in a second step an RL agent generates the sequence of actions toward the detected objective, represented by a set of state predicates. We show that the system can successfully recognise command sequences from this corpus as well as train the deep-RL network with symbolic input. We further show that the performance can be significantly increased by exploiting the symbolic representation to generate intermediate rewards.
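The pipeline summarised above, with symbolic predicates serving as both the state and goal representation for a DQN and as the source of intermediate rewards, can be illustrated with a minimal sketch. This is not the authors' implementation: the predicate names, action list, network sizes, and reward constants below are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): symbolic predicates encoded as a binary
# vector for a DQN, plus an intermediate reward shaped by newly satisfied goal
# predicates. All names and constants are hypothetical.
import random
import torch
import torch.nn as nn

PREDICATES = ["cup_on_table", "cup_in_hand", "microwave_open", "cup_in_microwave"]
ACTIONS = ["grasp_cup", "open_microwave", "put_cup_in_microwave", "close_microwave"]

def encode(state_predicates):
    """Map the set of currently true predicates to a fixed-length binary vector."""
    return torch.tensor([1.0 if p in state_predicates else 0.0 for p in PREDICATES])

class DQN(nn.Module):
    """Small MLP Q-network over the symbolic state encoding."""
    def __init__(self, n_predicates, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_predicates, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def shaped_reward(prev_state, next_state, goal_predicates, step_cost=-0.01, bonus=0.2):
    """Intermediate reward: small bonus for each goal predicate that becomes true."""
    newly_true = (next_state - prev_state) & goal_predicates
    reward = step_cost + bonus * len(newly_true)
    if goal_predicates <= next_state:  # all goal predicates hold -> task completed
        reward += 1.0
    return reward

# Epsilon-greedy action selection over the symbolic encoding.
q_net = DQN(len(PREDICATES), len(ACTIONS))
state = {"cup_on_table", "microwave_open"}
goal = {"cup_in_microwave"}
eps = 0.1
if random.random() < eps:
    action = random.randrange(len(ACTIONS))
else:
    with torch.no_grad():
        action = int(q_net(encode(state)).argmax())
print(ACTIONS[action])
```

The key point of the sketch is that the same predicate set serves three roles: it encodes the state fed to the network, it defines the goal detected from the instruction, and it provides the signal for intermediate reward shaping.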

Citation (APA)

Zamani, M. A., Magg, S., Weber, C., Wermter, S., & Fu, D. (2018). Deep reinforcement learning using compositional representations for performing instructions. Paladyn, 9(1), 358–373. https://doi.org/10.1515/pjbr-2018-0026
