Fusing vision and LIDAR - Synchronization, correction and occlusion reasoning


Abstract

Autonomous navigation in unstructured environments such as forest or country roads with dynamic objects remains a challenging task, particularly with respect to perceiving the environment with multiple different sensors. The problem has been addressed both by the computer vision community and by researchers working with laser range finding technology, such as the Velodyne HDL-64. Since cameras and LIDAR sensors complement one another in terms of color and depth perception, fusing both sensors is a reasonable way to provide color images with depth and reflectance information as well as 3D LIDAR point clouds with color information. In this paper we propose a method for sensor synchronization specifically designed for dynamic scenes, a low-level fusion of the data of both sensors, and a solution for the occlusion problem that arises from the different viewpoints of the fused sensors.
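
The sketch below is not the authors' method, only a generic illustration of the kind of low-level camera-LIDAR fusion and occlusion handling the abstract describes: LIDAR points are transformed into the camera frame, projected with a pinhole model, and colorized, with a naive per-pixel depth buffer standing in for occlusion reasoning. The function name, the frame convention (T_cam_lidar), and the use of a simple z-buffer are all assumptions for illustration; the paper's synchronization and occlusion reasoning for dynamic scenes are more involved.

```python
import numpy as np

def fuse_lidar_with_image(points_lidar, image, K, T_cam_lidar):
    """Colorize LIDAR points by projecting them into a camera image.

    Illustrative sketch only, not the paper's algorithm.

    points_lidar : (N, 3) points in the LIDAR frame
    image        : (H, W, 3) RGB image
    K            : (3, 3) camera intrinsic matrix
    T_cam_lidar  : (4, 4) rigid transform from LIDAR frame to camera frame
    """
    H, W = image.shape[:2]

    # Transform points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.0]

    # Pinhole projection: pixel = K * X / Z.
    uvw = (K @ pts_cam.T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)

    # Keep only points that fall inside the image bounds.
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    uv, pts_cam = uv[inside], pts_cam[inside]

    # Naive occlusion handling: a per-pixel depth buffer keeps only the
    # closest point, so points hidden behind foreground objects as seen
    # from the camera's viewpoint are not assigned a wrong color.
    depth = np.full((H, W), np.inf)
    colors = np.zeros((len(pts_cam), 3), dtype=np.uint8)
    visible = np.zeros(len(pts_cam), dtype=bool)
    for i in np.argsort(pts_cam[:, 2]):        # near-to-far order
        u, v = uv[i]
        if pts_cam[i, 2] < depth[v, u]:
            depth[v, u] = pts_cam[i, 2]
            colors[i] = image[v, u]
            visible[i] = True

    return uv[visible], pts_cam[visible], colors[visible]
```

A per-pixel z-buffer only resolves conflicts between points that land on the same pixel; handling the parallax between the two sensor viewpoints, as the abstract indicates, requires reasoning about occluding surfaces rather than single pixels.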


Authors

  • Sebastian Schneider
  • Michael Himmelsbach
  • Thorsten Luettel
  • Hans Joachim Wuensche
