Natural Language Acquisition and Grounding for Embodied Robotic Systems

  • Alomari M
  • Duckworth P
  • Hogg D
  • Cohn A

Abstract

We present a cognitively plausible, novel framework capable of learning both the grounding in visual semantics and the grammar of natural language commands given to a robot in a table-top environment. The input to the system consists of video clips of a manually controlled robot arm paired with natural language commands describing the action. No prior knowledge is assumed about the meaning of words or the structure of the language, except that there are different classes of words (corresponding to observable actions, spatial relations, and objects and their observable properties). The learning process automatically clusters the continuous perceptual spaces into concepts corresponding to the linguistic input. A novel relational graph representation is used to build connections between language and vision. In addition to grounding language in perception, the system induces a set of probabilistic grammar rules. The knowledge learned is used to parse new commands involving previously unseen objects.
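
To make the grounding step concrete, the following is a minimal illustrative sketch, not the authors' implementation: it clusters a continuous perceptual space (here, hypothetical RGB colour vectors for scene objects) into discrete concepts and then counts which words from the paired commands co-occur with each concept. The data, cluster count, and use of k-means are assumptions chosen only for illustration.

```python
# Illustrative sketch (not the paper's code): ground colour words in a
# continuous perceptual space via clustering plus word/concept co-occurrence.
from collections import Counter, defaultdict

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical paired data: one perceptual feature vector per scene object,
# plus the natural-language command that accompanied the video clip.
features = np.array([
    [0.9, 0.1, 0.1],   # reddish object
    [0.8, 0.2, 0.1],   # reddish object
    [0.1, 0.2, 0.9],   # bluish object
    [0.2, 0.1, 0.8],   # bluish object
])
commands = [
    "pick up the red block",
    "move the red cube left",
    "pick up the blue block",
    "push the blue cube forward",
]

# Cluster the continuous colour space into discrete visual concepts.
concepts = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# Count word/concept co-occurrences; the strongest associations link
# colour words ("red", "blue") to their visual clusters.
cooc = defaultdict(Counter)
for concept, command in zip(concepts, commands):
    for word in command.split():
        cooc[concept][word] += 1

for concept, counts in sorted(cooc.items()):
    print(f"concept {concept}: {counts.most_common(3)}")
```

Under these assumptions, each cluster's most distinctive co-occurring word emerges as its label, which is the intuition behind grounding words in perceptual concepts; the paper's full system additionally uses a relational graph representation and induces probabilistic grammar rules.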

Citation (APA)

Alomari, M., Duckworth, P., Hogg, D., & Cohn, A. (2017). Natural Language Acquisition and Grounding for Embodied Robotic Systems. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.11161
