A Dimensionality Reduction Method for the Fusion of NIR and Visible Image

Abstract

Fusing near-infrared (NIR) and visible images aims to provide more detailed images for human inspection or downstream computer vision applications. Image fusion relies heavily on two components: activity-level measurement and weight assignment. This paper employs a principal component analysis network (PCANet) together with an image pyramid to fuse NIR and visible data. First, a PCANet, an ultra-compact deep learning network, measures activity levels and assigns relative weights to the NIR and visible images; PCANet's feature-level activity measurement is better at capturing salient information such as NIR object structure and visible-image detail. Second, the image pyramid decomposes the weight maps and source images into multiple scales, and a weighted-average fusion rule combines the corresponding pyramid levels. Finally, reconstructing the fused pyramid produces the fused image. Experiments on more than eighty test image pairs across two datasets confirm the efficacy of the proposed approach: it outperforms six benchmark methods in both subjective user evaluation and objective assessment metrics.
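The pipeline described in the abstract, multi-scale decomposition, per-scale weighted averaging, and reconstruction, can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the authors' implementation: the blur/decimate and upsample operators are reduced to block averaging and pixel replication, and the per-pixel `weight` map is taken as a given input, standing in for the PCANet-derived activity/weight maps in the paper.

```python
import numpy as np

def down(x):
    """Halve resolution by 2x2 block averaging (stands in for blur + decimate)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(x):
    """Double resolution by nearest-neighbour replication."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    """Split an image into band-pass detail layers plus a low-frequency residual."""
    pyr, g = [], img
    for _ in range(levels - 1):
        s = down(g)
        pyr.append(g - up(s))   # detail lost by downsampling this level
        g = s
    pyr.append(g)               # low-frequency residual
    return pyr

def reconstruct(pyr):
    """Invert the pyramid: upsample and add detail back, coarse to fine."""
    img = pyr[-1]
    for detail in reversed(pyr[:-1]):
        img = up(img) + detail
    return img

def fuse(nir, vis, weight, levels=3):
    """Weighted-average fusion across pyramid scales.

    `weight` is a per-pixel NIR weight map in [0, 1] -- here a plain input,
    a stand-in for the PCANet-derived weights described in the paper.
    Image sides must be divisible by 2**(levels - 1).
    """
    pa = laplacian_pyramid(np.asarray(nir, float), levels)
    pb = laplacian_pyramid(np.asarray(vis, float), levels)
    w = np.asarray(weight, float)
    fused = []
    for i, (la, lb) in enumerate(zip(pa, pb)):
        fused.append(w * la + (1.0 - w) * lb)
        if i < levels - 1:
            w = down(w)  # match the weight map to the next, coarser scale
    return reconstruct(fused)
```

Because both the pyramid and the fusion rule are linear, a constant weight of 1 returns the NIR image exactly, and a constant weight of 0.5 returns the pixelwise average of the two sources.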

Citation (APA)

Gopinath, L., & Ruhan Bevi, A. (2023). A Dimensionality Reduction Method for the Fusion of NIR and Visible Image. In Lecture Notes in Networks and Systems (Vol. 798 LNNS, pp. 629–645). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-99-7093-3_42
