The resolution mismatch between freely available remote sensing datasets and crowd-sourced data poses a major challenge for data fusion. In classical classification problems, crowd-sourced data are represented as points that may or may not fall within the same pixel. This discrepancy can produce mixed pixels that are incorrectly classified, and it fails to retain a sufficient level of detail in the inferences drawn from the data. In this paper we propose a method that preserves detailed inferences from remote sensing datasets accompanied by crowd-sourced data, and we show that advanced machine learning techniques can be applied toward this objective. The proposed method consists of two steps: first, we enhance the spatial resolution of the satellite image using Convolutional Neural Networks (CNNs); second, we fuse the crowd-sourced data with the upscaled version of the satellite image. The scope of this paper covers only the first step. Results show that a CNN can enhance the resolution of Landsat 8 scenes both visually and quantitatively.
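To make the first step concrete, the sketch below shows what such a CNN super-resolution stage might look like. It assumes an SRCNN-style three-layer network (Dong et al., 2014) trained on low/high-resolution patch pairs; the layer widths, kernel sizes, band count, and upscale factor are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of the CNN super-resolution step, assuming an
# SRCNN-style architecture; hyperparameters below are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNN(nn.Module):
    """Three-layer CNN that maps a bicubically upscaled low-resolution
    scene to a sharper high-resolution estimate."""
    def __init__(self, channels=3):
        super().__init__()
        # Patch extraction, non-linear mapping, and reconstruction layers.
        self.extract = nn.Conv2d(channels, 64, kernel_size=9, padding=4)
        self.map = nn.Conv2d(64, 32, kernel_size=1)
        self.reconstruct = nn.Conv2d(32, channels, kernel_size=5, padding=2)

    def forward(self, x, upscale_factor=2):
        # Bicubic upsampling first, then learned filters refine the result.
        x = F.interpolate(x, scale_factor=upscale_factor,
                          mode="bicubic", align_corners=False)
        x = F.relu(self.extract(x))
        x = F.relu(self.map(x))
        return self.reconstruct(x)

# Usage with a dummy low-resolution Landsat-like patch (batch, bands, H, W).
model = SRCNN(channels=3)
lr_patch = torch.rand(1, 3, 64, 64)
hr_estimate = model(lr_patch)  # -> shape (1, 3, 128, 128)
```

Upsampling with bicubic interpolation before the convolutions keeps the network fully convolutional and agnostic to patch size, so the same weights can be applied to scenes of arbitrary extent.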
Ghaffar, M. A. A., Vu, T. T., & Maul, T. H. (2017). Multi-modal remote sensing data fusion framework. In International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives (Vol. 42, pp. 85–89). International Society for Photogrammetry and Remote Sensing. https://doi.org/10.5194/isprs-archives-XLII-4-W2-85-2017