A Novel Deep Learning Method for Thermal to Annotated Thermal-Optical Fused Images


Abstract

Thermal images record the passive radiation of objects and capture it in grayscale form. Such images have a very different data distribution from optical color images. We present a method that produces a grayscale thermo-optical fused mask from a thermal input. To the best of our knowledge, this is the first deep learning work that produces such a mask from a single thermal infrared input image. Our method is also unique in that the proposed deep learning model operates in the Discrete Wavelet Transform (DWT) domain rather than the gray-level domain. As part of this work, we also prepared a new database of thermal images that have been manually annotated to mark the Region of Interest across 5 classes of real-world images. Finally, we propose a simple, low-overhead statistical measure for identifying the region of interest in fused images, which we call the Region of Fusion (RoF). Experiments on 2 databases show encouraging results in identifying the region of interest in the fused images. We also show that these fused images can be processed more effectively than thermal images alone.
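To illustrate what working in the DWT domain (rather than the gray-level domain) might look like as a preprocessing step, the sketch below decomposes a grayscale thermal image into wavelet subbands with PyWavelets. This is a hypothetical, minimal illustration only; the paper's actual network architecture, wavelet choice, and fusion procedure are not reproduced here, and the `haar` wavelet and subband stacking are assumptions for the sake of the example.

```python
# Hypothetical sketch: move a grayscale thermal image into the DWT domain
# before feeding it to a network. Not the authors' implementation.
import numpy as np
import pywt


def thermal_to_dwt_channels(thermal: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    """Decompose a 2D grayscale thermal image into one level of 2D DWT subbands.

    Returns an array of shape (4, H/2, W/2) containing the approximation (LL)
    and detail (LH, HL, HH) coefficients, which could serve as input channels
    to a deep model operating in the wavelet domain.
    """
    cA, (cH, cV, cD) = pywt.dwt2(thermal.astype(np.float32), wavelet)
    return np.stack([cA, cH, cV, cD], axis=0)


if __name__ == "__main__":
    # Dummy 256x256 "thermal" image standing in for a real capture.
    dummy_thermal = np.random.rand(256, 256).astype(np.float32)
    subbands = thermal_to_dwt_channels(dummy_thermal)
    print(subbands.shape)  # (4, 128, 128)
```

A model trained on such subband stacks would see low-frequency structure and directional detail separately, which is one common motivation for wavelet-domain learning; how the present work actually exploits this is detailed in the full paper.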

Citation (APA)

Goswami, S., Singh, S. K., & Chaudhuri, B. B. (2023). A Novel Deep Learning Method for Thermal to Annotated Thermal-Optical Fused Images. In Communications in Computer and Information Science (Vol. 1776 CCIS, pp. 664–681). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-31407-0_50
