Hyperspectral and multispectral remote sensing image fusion based on endmember spatial information

Abstract

Hyperspectral (HS) images usually have high spectral resolution but low spatial resolution (LSR), whereas multispectral (MS) images have high spatial resolution (HSR) but low spectral resolution. HS-MS image fusion combines the advantages of both, which benefits accurate feature classification. In real cases, however, the LSR-HS and HSR-MS images acquired by heterogeneous sensors often differ in acquisition time, so classical fusion methods fail to produce effective results. To address this problem, we present a fusion method based on spectral unmixing and an image mask. Considering the differences between the two images, we first extract the endmembers and their corresponding positions from the invariant regions of the LSR-HS image. The endmembers of the HSR-MS image are then derived from the relationship that the HSR-MS and LSR-HS images are, respectively, the spectral and spatial degradations of the underlying HSR-HS image. The fused image is reconstructed from the two resulting matrices. Experiments on simulated and real datasets substantiate the effectiveness of our method both quantitatively and visually.
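The workflow described in the abstract follows the usual linear-mixing-model view of unmixing-based fusion: endmembers estimated from the LSR-HS image are paired with high-resolution abundances estimated from the HSR-MS image through the sensors' spectral response function. The sketch below illustrates that general idea only; the multiplicative NMF updates, function name, and parameters are illustrative assumptions rather than the authors' implementation, and the paper's mask restricting endmember extraction to temporally invariant regions is not reproduced here.

import numpy as np

def unmixing_fusion_sketch(lsr_hs, hsr_ms, srf, n_endmembers=10, n_iter=200, eps=1e-9):
    """Illustrative unmixing-based HS-MS fusion under a linear mixing model.

    lsr_hs : (h, w, B) low-spatial-resolution hyperspectral cube
    hsr_ms : (H, W, b) high-spatial-resolution multispectral cube
    srf    : (b, B) spectral response function mapping HS bands to MS bands
    Returns an (H, W, B) estimate of the high-resolution hyperspectral cube.
    """
    h, w, B = lsr_hs.shape
    H, W, b = hsr_ms.shape
    X = lsr_hs.reshape(-1, B).T          # (B, h*w), HS pixels as columns
    Y = hsr_ms.reshape(-1, b).T          # (b, H*W), MS pixels as columns
    rng = np.random.default_rng(0)

    # Step 1: unmix the LSR-HS image, X ~ E @ A_lr (E: endmembers, A_lr: abundances),
    # using plain multiplicative NMF updates as a stand-in for the paper's extraction step.
    E = np.abs(rng.standard_normal((B, n_endmembers)))
    A_lr = np.abs(rng.standard_normal((n_endmembers, h * w)))
    for _ in range(n_iter):
        A_lr *= (E.T @ X) / (E.T @ E @ A_lr + eps)
        E *= (X @ A_lr.T) / (E @ A_lr @ A_lr.T + eps)

    # Step 2: spectrally degrade the endmembers to the MS bands and estimate
    # high-resolution abundances from the HSR-MS image, Y ~ (srf @ E) @ A_hr.
    E_ms = srf @ E                        # (b, n_endmembers)
    A_hr = np.abs(rng.standard_normal((n_endmembers, H * W)))
    for _ in range(n_iter):
        A_hr *= (E_ms.T @ Y) / (E_ms.T @ E_ms @ A_hr + eps)

    # Step 3: fuse the HS endmembers with the high-resolution abundances.
    return (E @ A_hr).T.reshape(H, W, B)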

Cite

APA

Feng, X., He, L., Cheng, Q., Long, X., & Yuan, Y. (2020). Hyperspectral and multispectral remote sensing image fusion based on endmember spatial information. Remote Sensing, 12(6). https://doi.org/10.3390/rs12061009
