Grounding symbols in multi-modal instructions

Citations: 3 · Mendeley readers: 94

Abstract

As robots begin to cohabit with humans in semi-structured environments, the need arises to understand instructions involving rich variability, for instance when learning to ground symbols in the physical world. Realistically, this task must cope with small datasets consisting of a particular user's contextual assignment of meaning to terms. We present a method for processing a raw stream of cross-modal input, i.e. linguistic instructions, visual perception of a scene and a concurrent trace of 3D eye-tracking fixations, to produce a segmentation of objects with a corresponding association to high-level concepts. To test our framework, we present experiments in a table-top object manipulation scenario. Our results show that our model learns the user's notions of colour and shape from a small number of physical demonstrations, generalising to identify physical referents for novel combinations of the words.
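The grounding idea sketched in the abstract can be caricatured as a simple co-occurrence model: words spoken while the user fixates an object are associated with that object's perceptual features, and at test time a novel phrase is resolved by scoring scene objects against the learned per-word prototypes. The sketch below is purely illustrative; the feature choices (hue, elongation), the averaging model, and all numbers are assumptions, not the paper's actual method.

```python
from collections import defaultdict
import math

def learn_groundings(demonstrations):
    """demonstrations: (word, feature_vector) pairs taken from
    fixation-aligned segments of a demonstration.
    Returns a per-word mean feature prototype."""
    sums = defaultdict(list)
    counts = defaultdict(int)
    for word, feats in demonstrations:
        if not sums[word]:
            sums[word] = [0.0] * len(feats)
        sums[word] = [s + f for s, f in zip(sums[word], feats)]
        counts[word] += 1
    return {w: [s / counts[w] for s in sums[w]] for w in sums}

def score(prototype, feats):
    # Negative Euclidean distance: higher means a better match.
    return -math.dist(prototype, feats)

def resolve(phrase, groundings, scene):
    """Pick the scene object whose features best match every known
    word in the phrase; this is what lets novel word combinations
    ('red cube') be resolved from separately learned groundings."""
    def total(obj):
        return sum(score(groundings[w], obj) for w in phrase if w in groundings)
    return max(scene, key=total)

# Hypothetical (hue, elongation) features of fixated objects,
# paired with the word spoken at fixation time.
demos = [("red", [0.95, 0.5]), ("red", [0.9, 0.2]),
         ("blue", [0.1, 0.5]), ("cube", [0.5, 0.1]),
         ("cube", [0.9, 0.15]), ("bar", [0.3, 0.9])]
g = learn_groundings(demos)
scene = [[0.92, 0.12], [0.12, 0.88], [0.5, 0.9]]
print(resolve(["red", "cube"], g, scene))  # → [0.92, 0.12]
```

Because colour and shape words are grounded independently, the phrase "red cube" picks out the red-ish, cube-like object even though that exact word pair never appeared during training.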

Citation (APA)

Hristov, Y., Penkov, S., Lascarides, A., & Ramamoorthy, S. (2017). Grounding symbols in multi-modal instructions. In Proceedings of the 1st Workshop on Language Grounding for Robotics, RoboNLP 2017 at the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017 (pp. 49–57). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w17-2807
