On the analysis of the depth error on the road plane for monocular vision-based robot navigation

Abstract

A mobile robot equipped with a single camera can take images at different locations to obtain 3D information about the environment for navigation. The depth information perceived by the robot is critical for obstacle avoidance. Given a calibrated camera, the accuracy of depth computation largely depends on the locations where the images are taken. For any given image pair, the depth error in regions close to the camera baseline can be excessively large or even infinite due to the degeneracy introduced by triangulation in the depth computation. Unfortunately, this region often overlaps with the robot's direction of motion, which can lead to collisions. To address this issue, we analyze the depth computation and propose a predictive depth-error model as a function of the motion parameters. We name the region where the depth error exceeds a given threshold the untrusted area. Since the robot needs to know beforehand how its motion affects the depth-error distribution, we propose a closed-form model that predicts how the untrusted area is distributed on the road plane for given robot/camera positions. The analytical results have been successfully verified in experiments with a mobile robot. © 2009 Springer-Verlag.
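The baseline dependence described in the abstract can be illustrated with the standard first-order two-view triangulation sensitivity, δZ ≈ Z²·δd / (b·f), where b is the baseline, f the focal length in pixels, and δd the pixel matching error. This is only a generic sketch of why depth error grows as the effective baseline shrinks, not the paper's closed-form predictive model; the function name and parameter values below are illustrative assumptions.

```python
def depth_error(depth_m, baseline_m, focal_px, match_err_px=0.5):
    """First-order depth-error estimate for two-view triangulation.

    Uses the standard sensitivity dZ ~= Z^2 * dd / (b * f): error grows
    quadratically with depth and blows up as the baseline b -> 0, which
    mirrors the triangulation degeneracy near the camera baseline.
    (Illustrative sketch only, not the paper's predictive model.)
    """
    if baseline_m <= 0:
        raise ValueError("baseline must be positive")
    return depth_m ** 2 * match_err_px / (baseline_m * focal_px)

# A wider baseline between the two camera positions shrinks the error
# at the same depth (hypothetical numbers: 5 m depth, f = 800 px).
err_narrow = depth_error(5.0, 0.1, 800.0)  # 10 cm baseline -> 0.15625 m
err_wide = depth_error(5.0, 0.5, 800.0)    # 50 cm baseline -> 0.03125 m
print(err_narrow, err_wide)
```

For a robot moving roughly along its optical axis, points near the motion direction see an effective baseline close to zero, so their predicted depth error exceeds any fixed threshold, producing the untrusted area the paper characterizes.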

APA

Song, D., Lee, H., & Yi, J. (2010). On the analysis of the depth error on the road plane for monocular vision-based robot navigation. In Springer Tracts in Advanced Robotics (Vol. 57, pp. 301–315). https://doi.org/10.1007/978-3-642-00312-7_19
