Multimodal Sensor Fusion in Single Thermal Image Super-Resolution


Abstract

With the rapid growth of the visual surveillance and security sectors, thermal infrared images have become necessary in a wide variety of industrial applications, even though IR sensors remain more expensive than RGB sensors of the same resolution. In this paper, we propose a deep learning solution to enhance thermal image resolution. The contributions are: (I) a multimodal visual-thermal fusion model that addresses thermal image super-resolution by integrating high-frequency information from the visual image; (II) an investigation of different network architecture schemes in the literature, including their up-sampling methods, learning procedures, and optimization functions, showing their beneficial contribution to the super-resolution problem; (III) ULB17-VT, a benchmark dataset containing thermal images and their visual-image counterparts; (IV) a qualitative evaluation on a large test set (58 samples, 22 raters) showing that the proposed model outperforms state-of-the-art methods.
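The core idea of contribution (I), injecting high-frequency detail from an aligned visual image into an upsampled thermal image, can be sketched outside any learned network. The helper names and the fixed fusion weight `alpha` below are illustrative assumptions; the paper learns this fusion with a CNN rather than using a hand-crafted filter.

```python
import numpy as np

def upsample_nearest(img, s):
    # Nearest-neighbour upsampling by an integer factor s.
    return np.repeat(np.repeat(img, s, axis=0), s, axis=1)

def box_blur(img, k=3):
    # Simple k x k box blur with edge-replicated padding (k odd).
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse_thermal_visual(thermal_lr, visual_hr, s=4, alpha=0.5):
    """Toy visual-thermal fusion: upsample the low-resolution thermal
    image and add the high-frequency residual of the aligned
    high-resolution visual image. Illustrative only; the learned model
    in the paper replaces both the upsampler and the fusion rule."""
    up = upsample_nearest(thermal_lr, s).astype(float)
    hf = visual_hr.astype(float) - box_blur(visual_hr)  # high-pass component
    return up + alpha * hf
```

For example, fusing an 8 x 8 thermal patch with a 32 x 32 visual patch at scale 4 yields a 32 x 32 output; a perfectly flat visual image contributes no high frequencies, so the result is just the upsampled thermal image.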

Citation (APA)

Almasri, F., & Debeir, O. (2019). Multimodal Sensor Fusion in Single Thermal Image Super-Resolution. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11367 LNCS, pp. 418–433). Springer Verlag. https://doi.org/10.1007/978-3-030-21074-8_34
