Model-based visual self-localization using Gaussian spheres

Abstract

A novel model-based approach to global self-localization using active stereo vision and density Gaussian spheres is presented. The proposed object-recognition components deliver noisy percept subgraphs, which are filtered and fused into an ego-centered reference frame. In subsequent stages, the required vision-to-model associations are extracted by selecting ego-percept subsets to prune and match the corresponding world-model subgraph. Ideally, these coupled subgraphs hold the information necessary to obtain the model-to-world transformation, i.e., the pose of the robot. In practice, however, the pose estimate is not robust, owing to the uncertainty introduced when recovering the Euclidean metric from images and during the mapping from the camera to the ego-center. The approach therefore models the uncertainty of each percept with a radial normal distribution. This formulation admits a closed-form solution that not only yields the maximal-density position, i.e., the optimal ego-center, but also guarantees a solution even in situations where purely geometric spheres would not intersect. © 2010 Springer-Verlag London Limited.
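To make the density model concrete: each percept i can be read as a sphere of radius r_i centered at a landmark position c_i, blurred radially so that the density at a point x is p_i(x) ∝ exp(−(‖x − c_i‖ − r_i)² / (2σ_i²)). The optimal ego-center maximizes the product of these densities, which is equivalent to minimizing Σ_i ((‖x − c_i‖ − r_i) / σ_i)². The chapter derives this maximum in closed form; as a rough illustration only, the Python sketch below recovers it numerically. All landmark positions, ranges, and standard deviations here are invented values, not data from the chapter.

import numpy as np
from scipy.optimize import minimize

# Hypothetical percepts: landmark centers c_i (ego frame), measured
# ranges r_i, and radial standard deviations sigma_i (all invented).
centers = np.array([[0.0, 0.0, 0.0],
                    [2.0, 0.0, 0.0],
                    [1.0, 2.0, 0.0]])
ranges  = np.array([1.5, 1.6, 1.4])
sigmas  = np.array([0.05, 0.08, 0.06])

def neg_log_density(x):
    # Product of radial normal densities -> sum of squared,
    # sigma-weighted radial residuals (||x - c_i|| - r_i).
    d = np.linalg.norm(centers - x, axis=1)
    return np.sum(((d - ranges) / sigmas) ** 2)

x0 = centers.mean(axis=0)             # crude initial guess
res = minimize(neg_log_density, x0)   # numerical stand-in for the
                                      # chapter's closed-form solution
print("estimated ego-center:", res.x)

Note that the sum-of-squares objective stays finite even when the spheres themselves share no common intersection point, which mirrors the robustness argument in the abstract.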

Citation (APA)

Gonzalez-Aguirre, D., Asfour, T., Bayro-Corrochano, E., & Dillmann, R. (2010). Model-based visual self-localization using Gaussian spheres. In Geometric Algebra Computing: in Engineering and Computer Science (pp. 299–324). Springer London. https://doi.org/10.1007/978-1-84996-108-0_15
