A new approach for combining time-of-flight and RGB cameras based on depth-dependent planar projective transformations

Abstract

© 2015 by the authors; licensee MDPI, Basel, Switzerland.

Image registration for sensor fusion is a valuable technique for acquiring 3D and colour information of a scene. Nevertheless, this process normally relies on feature-matching techniques, which is a drawback for combining sensors that cannot deliver common features. The combination of ToF and RGB cameras is an instance of that problem. Typically, the fusion of these sensors is based on computing the extrinsic parameters of the coordinate transformation between the two cameras. This leads to a loss of colour information because of the low resolution of the ToF camera, and sophisticated algorithms are required to minimize this issue. This work proposes a method for registering sensors with non-common features that avoids the loss of colour information. The depth information is used as a virtual feature for estimating a depth-dependent homography lookup table (Hlut). The homographies are computed from sets of ground control points in 104 images. Since the distances from the control points to the ToF camera are known, the working distance of each element of the Hlut can be estimated. Finally, two series of experimental tests were carried out to validate the capabilities of the proposed method.
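The core idea of the Hlut lends itself to a short illustration. Below is a minimal Python sketch of how a depth-dependent homography lookup table might be queried at runtime: given a ToF pixel and its measured depth, the entry whose calibrated working distance is closest is selected, and its planar projective transformation maps the pixel into the RGB image. The nearest-neighbour selection, the function names, and the placeholder homographies are assumptions made for illustration, not the paper's exact procedure.

```python
import numpy as np

# Hypothetical Hlut: each entry pairs a calibrated working distance
# (metres) with the 3x3 planar homography estimated from ground
# control points observed at that depth. Identity matrices are
# placeholders; in practice each H comes from control-point fitting.
hlut = [
    (0.5, np.eye(3)),
    (1.0, np.eye(3)),
    (2.0, np.eye(3)),
]

def select_homography(depth, hlut):
    """Pick the Hlut entry whose working distance is closest to `depth`.

    Nearest-neighbour selection is an assumption; interpolation
    between neighbouring entries would also be plausible.
    """
    return min(hlut, key=lambda entry: abs(entry[0] - depth))[1]

def tof_to_rgb(u, v, depth, hlut):
    """Map a ToF pixel (u, v) at a measured depth into RGB image
    coordinates via the depth-selected projective transformation."""
    H = select_homography(depth, hlut)
    p = H @ np.array([u, v, 1.0])     # apply homography in homogeneous coords
    return p[0] / p[2], p[1] / p[2]   # dehomogenise to pixel coordinates

# Example: map the ToF pixel (80, 60) measured at 1.2 m depth.
x, y = tof_to_rgb(80, 60, 1.2, hlut)
```

Because the homography is chosen per pixel from its own depth reading, the warp is applied in the ToF-to-RGB direction at the RGB camera's full resolution, which is what lets the method avoid discarding colour information.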

Citation (APA)

Salinas, C., Fernández, R., Montes, H., & Armada, M. (2015). A new approach for combining time-of-flight and RGB cameras based on depth-dependent planar projective transformations. Sensors (Switzerland), 15(9), 24615–24643. https://doi.org/10.3390/s150924615
