Human-robot collaborative scene mapping from relational descriptions

Abstract

In this article we propose a method for cooperatively building a scene map between a human and a robot, using a spatial relational model that the robot employs to interpret human descriptions of the scene. The description consists of a set of spatial relations between the objects in the scene, and the scene map contains the positions of these objects. To this end, we propose a model based on the generation of scalar applicability fields for each of the available relations. The method can be summarized as follows. First, a person enters the room and describes the scene to the robot, including in the description semantic information about the objects that the robot cannot obtain from its sensors. From this description the robot builds the "scene mental map". Second, the robot senses the scene with a 2D laser range finder, building the "scene sensed map"; the object positions in the mental map are used to guide the sensing process. Third, the robot fuses the two maps, linking the semantic information about the described objects to the corresponding sensed objects. The resulting map is called the "scene enriched map".
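The abstract gives no code, but the applicability-field idea can be illustrated with a minimal Python sketch. This is a hypothetical model, not the authors' implementation: it scores every cell of a 2D grid for how well it satisfies a directional relation (e.g. "left of the reference object") by combining an angular term with a Gaussian proximity prior, and fields for several relations are multiplied to localize a described object in the mental map. The function name, the choice of terms, and the parameter values are all assumptions.

    import numpy as np

    def applicability(ref_xy, grid_x, grid_y,
                      direction=(-1.0, 0.0), mu=1.0, sigma=0.5):
        # Hypothetical scalar applicability field for a directional
        # relation such as "left of the reference" (direction = (-1, 0)).
        dx = grid_x - ref_xy[0]
        dy = grid_y - ref_xy[1]
        dist = np.hypot(dx, dy) + 1e-9  # avoid division by zero at ref
        # Angular term: 1 when the cell lies exactly along `direction`.
        ang = np.clip((dx * direction[0] + dy * direction[1]) / dist, 0.0, 1.0)
        # Proximity term: prefer cells about `mu` metres from the reference.
        prox = np.exp(-((dist - mu) ** 2) / (2.0 * sigma ** 2))
        return ang * prox

    # Fields from several relations are combined by multiplication;
    # the most applicable cell gives the object's mental-map position.
    xs, ys = np.meshgrid(np.linspace(-3, 3, 121), np.linspace(-3, 3, 121))
    field = (applicability((0.0, 0.0), xs, ys, direction=(-1.0, 0.0))      # left of A
             * applicability((-1.0, -1.5), xs, ys, direction=(0.0, 1.0)))  # in front of B
    i, j = np.unravel_index(np.argmax(field), field.shape)
    print("estimated position:", float(xs[i, j]), float(ys[i, j]))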
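The fusion step can be sketched in the same hedged spirit. Assuming the sensed map reduces to a list of object centroids segmented from the laser scan, the snippet below links each described object to its nearest centroid within a distance gate, carrying the semantic label over; the paper's actual matching procedure is not specified in the abstract and may be more elaborate.

    import numpy as np

    def fuse_maps(mental, sensed, max_dist=1.0):
        # mental: {label: (x, y)} positions estimated from the description.
        # sensed: [(x, y), ...] centroids segmented from the laser scan.
        # Returns the enriched map: each label paired with the position of
        # its nearest sensed centroid, or with the mental estimate when no
        # centroid lies within `max_dist` metres (object not detected).
        # Greedy per-object matching; ignores one-to-one constraints.
        sensed = np.asarray(sensed, dtype=float)
        enriched = {}
        for label, (mx, my) in mental.items():
            d = np.hypot(sensed[:, 0] - mx, sensed[:, 1] - my)
            k = int(np.argmin(d))
            enriched[label] = tuple(sensed[k]) if d[k] <= max_dist else (mx, my)
        return enriched

    print(fuse_maps({"table": (-1.0, -0.4), "chair": (2.0, 2.0)},
                    [(-0.8, -0.3), (0.5, 1.9)]))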

Citation (APA)

Carrión, E. R., & Sanfeliu, A. (2014). Human-robot collaborative scene mapping from relational descriptions. In Advances in Intelligent Systems and Computing (Vol. 252, pp. 331–346). Springer Verlag. https://doi.org/10.1007/978-3-319-03413-3_24
