Spatial knowledge representation for human-robot interaction

Abstract

Non-intuitive styles of interaction between humans and mobile robots still constitute a major barrier to the wider application and acceptance of mobile robot technology. More natural interaction can only be achieved if ways are found of bridging the gap between the forms of spatial knowledge maintained by such robots and the forms of language used by humans to communicate such knowledge. In this paper, we present the beginnings of a computational model for representing spatial knowledge that is appropriate for interaction between humans and mobile robots. Work on spatial reference in human-human communication has established a range of reference systems adopted when referring to objects; we show the extent to which these strategies transfer to the human-robot situation and touch upon the problem of differing perceptual systems. Our results were obtained within an implemented kernel system that allowed experiments with human test subjects interacting with the system to be performed. We show how the results of the experiments can be used to improve the adequacy and the coverage of the system, and highlight necessary directions for future research.
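As a rough illustration of the kind of spatial reference system the abstract alludes to, the following is a minimal Python sketch of an intrinsic projective reference system, in which an object is classified as being in front of, behind, to the left of, or to the right of a reference object according to its own orientation. The names (Pose, projective_relation) and the 90° angular sectors are illustrative assumptions, not the authors' actual model.

```python
import math
from dataclasses import dataclass

# Hypothetical sketch of a projective spatial reference system; the structure
# and thresholds are illustrative assumptions, not the paper's implementation.

@dataclass
class Pose:
    x: float
    y: float
    heading: float  # orientation in radians


def projective_relation(relatum: Pose, referent_x: float, referent_y: float) -> str:
    """Classify the referent's position relative to the relatum's own
    orientation (an intrinsic reference system): front, back, left, or right."""
    # Angle from the relatum to the referent, measured against the relatum's heading.
    angle = math.atan2(referent_y - relatum.y, referent_x - relatum.x) - relatum.heading
    # Normalize to (-pi, pi].
    angle = (angle + math.pi) % (2 * math.pi) - math.pi
    if abs(angle) <= math.pi / 4:
        return "front"
    if abs(angle) >= 3 * math.pi / 4:
        return "back"
    return "left" if angle > 0 else "right"


# Example: a robot facing along +x, with an object ahead and one to its left.
robot = Pose(x=0.0, y=0.0, heading=0.0)
print(projective_relation(robot, 2.0, 0.5))  # -> "front"
print(projective_relation(robot, 0.0, 2.0))  # -> "left"
```

A relative (speaker-centred) or extrinsic reference system could be sketched analogously by substituting the speaker's or an external frame's pose for the relatum's own orientation.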

Citation (APA)

Moratz, R., Tenbrink, T., Bateman, J., & Fischer, K. (2003). Spatial knowledge representation for human-robot interaction. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 2685, pp. 263–286). Springer Verlag. https://doi.org/10.1007/3-540-45004-1_16
