Stereo Vision-Based Gamma-Ray Imaging for 3D Scene Data Fusion

Abstract

Recent gamma-ray imagers that integrate multi-contextual sensors with advanced computer vision techniques have enabled unprecedented capabilities in the detection, imaging, reconstruction, and mapping of radioactive sources. Notwithstanding these remarkable capabilities, the additional sensors involved, such as light detection and ranging (LiDAR) units, RGB-D sensors (e.g., Microsoft Kinect), and inertial measurement units (IMUs), are mostly expensive. Instead of relying on such sensors, this paper introduces a modest three-dimensional (3D) gamma-ray imaging method that exploits advances in modern stereo vision. A stereo line equation model is proposed to properly identify the distribution area of the gamma-ray intensities used for two-dimensional (2D) visualization. Scene data of the surrounding environment captured at different locations are reconstructed by re-projecting disparity images created with the semi-global matching (SGM) algorithm, and the reconstructions are merged using the point-to-point iterative closest point (ICP) algorithm. Instead of superimposing 2D radioisotope images on the merged scene, reconstructions of the 2D gamma images are fused with it to create a detailed 3D volume. Experimental results demonstrate the accuracy of the proposed fusion method.
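The geometric core of the pipeline described above — re-projecting a disparity image to 3D points and merging point clouds with point-to-point ICP — can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: the reprojection matrix `Q`, the brute-force nearest-neighbour search, and all function names are assumptions chosen for clarity.

```python
import numpy as np

def reproject_disparity(disparity, Q):
    """Re-project a disparity image to N x 3 points using a 4x4
    reprojection matrix Q (as produced by stereo rectification).
    Pixels with non-positive disparity are treated as invalid."""
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0
    # Homogeneous pixel coordinates [u, v, d, 1], one column per pixel.
    pts = np.stack([u[valid], v[valid], disparity[valid],
                    np.ones(int(valid.sum()))], axis=0)
    X = Q @ pts
    return (X[:3] / X[3]).T  # Euclidean N x 3 points

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst
    given one-to-one correspondences (Kabsch / SVD method)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # correct an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp_point_to_point(src, dst, iters=30):
    """Point-to-point ICP: alternate nearest-neighbour matching with
    SVD-based rigid alignment. Returns src aligned onto dst."""
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours; fine for a small sketch,
        # a k-d tree would be used at realistic point-cloud sizes.
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matches = dst[d.argmin(axis=1)]
        R, t = best_fit_transform(cur, matches)
        cur = cur @ R.T + t
    return cur
```

With scene clouds from two viewpoints, `icp_point_to_point` would register the second onto the first before the gamma-image reconstruction is fused into the merged volume.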

Citation (APA)
Rathnayaka, P., Baek, S. H., & Park, S. Y. (2019). Stereo Vision-Based Gamma-Ray Imaging for 3D Scene Data Fusion. IEEE Access, 7, 89604–89613. https://doi.org/10.1109/ACCESS.2019.2926542
