Multi-camera sensor system for 3D segmentation and localization of multiple mobile robots

Abstract

This paper presents a method for motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras is placed in fixed positions within the environment (the intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function that combines information from all the cameras and does not rely on prior knowledge or invasive landmarks on board the robots. The objective function depends on three groups of variables: the segmentation boundaries, the motion parameters, and the depth. To minimize it, we use a greedy iterative algorithm with three steps that, after initialization of the segmentation boundaries and depth, are repeated until convergence. © 2010 by the authors.
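The three-step greedy minimization described above can be sketched as an alternating (coordinate-descent) loop: each of the three variable groups is minimized in turn while the other two are held fixed, repeating until the objective stops decreasing. The snippet below is only an illustrative skeleton, not the paper's actual objective: the quadratic toy objective, the scalar variable groups, and all function names are hypothetical placeholders standing in for the multi-camera formulation.

```python
# Hypothetical sketch of the alternating-minimization structure described
# in the abstract. The objective below is a toy quadratic, NOT the paper's
# multi-camera objective; variable groups are reduced to scalars.

def objective(boundaries, motion, depth):
    # Placeholder objective with a known minimum at (1, 2, 3).
    return (boundaries - 1.0) ** 2 + (motion - 2.0) ** 2 + (depth - 3.0) ** 2

def minimize_one_group(f, x, lr=0.1, steps=50):
    # Minimize over a single variable group by gradient descent,
    # using a central finite-difference gradient.
    h = 1e-6
    for _ in range(steps):
        grad = (f(x + h) - f(x - h)) / (2.0 * h)
        x -= lr * grad
    return x

def alternating_minimization(tol=1e-8, max_iters=100):
    # Step 0: initialize segmentation boundaries and depth (the abstract
    # initializes these two groups), then repeat the three per-group
    # minimization steps until convergence.
    boundaries, motion, depth = 0.0, 0.0, 0.0
    prev = objective(boundaries, motion, depth)
    for _ in range(max_iters):
        boundaries = minimize_one_group(lambda b: objective(b, motion, depth), boundaries)
        motion = minimize_one_group(lambda m: objective(boundaries, m, depth), motion)
        depth = minimize_one_group(lambda d: objective(boundaries, motion, d), depth)
        cur = objective(boundaries, motion, depth)
        if prev - cur < tol:
            break
        prev = cur
    return boundaries, motion, depth

print(alternating_minimization())
```

In the paper the per-group updates act on image boundaries, rigid-motion parameters, and per-pixel depth rather than scalars, but the control flow — initialize two groups, then cycle through three partial minimizations until convergence — follows this pattern.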

Citation (APA)
Losada, C., Mazo, M., Palazuelos, S., Pizarro, D., & Marrón, M. (2010). Multi-camera sensor system for 3D segmentation and localization of multiple mobile robots. Sensors, 10(4), 3261–3279. https://doi.org/10.3390/s100403261
