To analyze potential relations between different imaging technologies such as RGB, hyperspectral, IR and thermal cameras, spatially corresponding image regions need to be identified. Since images from different cameras cannot be taken from the same pose simultaneously, corresponding pixels in the captured images are spatially displaced or subject to time-variant factors. Furthermore, additional spatial deviations in the images are caused by varying camera parameters such as focal length, principal point and lens distortion. To re-establish the spatial relationship between images of different modalities, additional constraints need to be taken into account. For this reason, a new intermodal sensor fusion technique called the Virtual Multimodal Camera (VMC) is presented in this paper. Using the presented approach, spatially corresponding images can be rendered for different camera technologies from the same virtual pose using a common parameter set. As a result, image points of the different modalities can be set into a spatial relationship so that the pixel locations in the images correspond to the same physical location. Additional contributions of this paper are the introduction of a hybrid calibration pattern for intrinsic and extrinsic intermodal camera calibration and a high-performance 2D-to-3D mapping procedure. All steps of the algorithm are executed in parallel on a graphics processing unit (GPU), so that large numbers of spatially corresponding images can be generated online for later analysis of intermodal relations.
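The core idea of rendering from a common virtual pose can be illustrated with a standard 2D-to-3D-to-2D reprojection under a pinhole camera model. The following is a minimal sketch, not the paper's GPU implementation; all intrinsic matrices, the baseline, and the helper functions are illustrative assumptions.

```python
import numpy as np

def backproject(u, v, depth, K):
    """2D-to-3D: lift pixel (u, v) with metric depth into the camera frame."""
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    return np.array([x, y, depth])

def project(p_cam, K):
    """3D-to-2D: project a 3D point in the camera frame to pixel coordinates."""
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

# Illustrative intrinsics of a source camera (e.g. RGB) and of the virtual
# camera's common parameter set (focal length, principal point).
K_src = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])
K_virt = np.array([[600.0,   0.0, 320.0],
                   [  0.0, 600.0, 240.0],
                   [  0.0,   0.0,   1.0]])

# Assumed rigid transform from the source pose to the virtual pose
# (here: a 10 cm translation along x, no rotation).
R = np.eye(3)
t = np.array([0.1, 0.0, 0.0])

p3d = backproject(400.0, 300.0, 2.0, K_src)  # lift a pixel at 2 m depth
p_virt_frame = R @ p3d + t                   # move into the virtual pose
uv = project(p_virt_frame, K_virt)           # render into the virtual image
print(uv)  # pixel location in the virtual camera image
```

Applying the same transform chain to every modality, each with its own calibrated intrinsics but the same virtual pose and common parameter set, yields images whose pixel locations correspond to the same physical location. Lens distortion, which the paper also accounts for, is omitted here for brevity.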
CITATION STYLE
Kleinschmidt, S. P., & Wagner, B. (2018). Spatial fusion of different imaging technologies using a virtual multimodal camera. In Lecture Notes in Electrical Engineering (Vol. 430, pp. 153–174). Springer Verlag. https://doi.org/10.1007/978-3-319-55011-4_8