Human three-dimensional modeling based on intelligent sensor fusion for a tele-operated mobile robot

Abstract

In this paper, we discuss robot vision for perceiving humans and the environment around a mobile robot. We developed a tele-operated mobile robot with a pan-tilt mechanism carrying a camera and a laser range finder (LRF). The camera provides color information, and the LRF provides the distance from the robot to surrounding objects. We propose a sensor fusion method that extracts a human from the measured data by integrating these outputs based on the concept of synthesis. Finally, we show experimental results of the proposed method. © Springer-Verlag Berlin Heidelberg 2007.
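
The abstract does not detail the fusion algorithm, so the following is only a minimal sketch of the general idea: projecting LRF range readings onto image columns and combining a nearness cue with a crude color cue to flag candidate human regions. All names, parameters, and thresholds (field of view, person range, skin-tone test) are illustrative assumptions, not the authors' method.

```python
# Minimal sketch (not the authors' implementation) of fusing camera color
# and LRF range data to flag image columns that may contain a person.
# Field of view, thresholds, and array shapes are illustrative assumptions.

import numpy as np

def project_lrf_to_columns(angles_rad, ranges_m, image_width, h_fov_rad):
    """Map each LRF beam angle to an image column, assuming the LRF and
    camera share the pan-tilt head and are roughly aligned."""
    # Normalize angle to [0, 1] across the camera's horizontal field of view.
    u = (angles_rad + h_fov_rad / 2.0) / h_fov_rad
    cols = np.clip((u * (image_width - 1)).astype(int), 0, image_width - 1)
    return cols

def fuse_color_and_range(rgb_image, angles_rad, ranges_m,
                         h_fov_rad=np.deg2rad(60.0),
                         max_person_range_m=3.0):
    """Return a per-column boolean mask that is True where a nearby object
    coincides with skin-like color (a crude stand-in for 'human')."""
    h, w, _ = rgb_image.shape
    cols = project_lrf_to_columns(angles_rad, ranges_m, w, h_fov_rad)

    # Range cue: object closer than the assumed person distance.
    near = ranges_m < max_person_range_m

    # Color cue: very rough skin-tone test on each column's mean color.
    mean_color = rgb_image.mean(axis=0)          # shape (w, 3), averaged over rows
    r, g, b = mean_color[:, 0], mean_color[:, 1], mean_color[:, 2]
    skin_like = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

    mask = np.zeros(w, dtype=bool)
    mask[cols[near]] = True
    return mask & skin_like

# Usage example with synthetic data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(120, 160, 3)).astype(np.uint8)
    angles = np.linspace(-np.deg2rad(30), np.deg2rad(30), 181)
    ranges = np.full_like(angles, 5.0)
    ranges[80:100] = 1.5                         # a nearby object in front
    print(fuse_color_and_range(image, angles, ranges).sum(), "candidate columns")
```

In practice the paper's 3-D human modeling would also use the pan-tilt angles to build a full point cloud rather than a per-column mask; this sketch only illustrates the color-plus-range fusion step.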

CITATION STYLE

APA

Kubota, N., Satomi, M., Taniguchi, K., & Nogawa, Y. (2007). Human three-dimensional modeling based on intelligent sensor fusion for a tele-operated mobile robot. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4694 LNAI, pp. 98–106). Springer Verlag. https://doi.org/10.1007/978-3-540-74829-8_13
