Deep learning models that combine spectral and spatial features have proven effective for hyperspectral image (HSI) classification. However, most spatial-feature integration methods consider only a single input spatial scale, regardless of the varied shapes and sizes of objects across the image plane, and therefore miss scale-dependent information. In this paper, we propose hierarchical multi-scale convolutional neural networks with auxiliary classifiers (HMCNN-AC) to learn hierarchical multi-scale spectral–spatial features for HSI classification. First, to better exploit spatial information, image patches are generated for each pixel at multiple spatial scales; all patches are centered on the same central spectrum but cover progressively smaller spatial extents. Then, multi-scale CNNs extract spectral–spatial features from the patch at each scale. The resulting multi-scale convolutional features are treated as structured sequential data with spectral–spatial dependencies, and a bidirectional LSTM is used to capture these correlations and extract a hierarchical representation for each pixel. To better train the whole network, weighted auxiliary classifiers are attached to the multi-scale CNNs and optimized jointly with the main loss function. Experimental results on three public HSI datasets demonstrate the superiority of the proposed framework over several state-of-the-art methods.
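
The abstract outlines a concrete pipeline: multi-scale patch extraction around each pixel, per-scale CNN feature extraction, a bidirectional LSTM over the sequence of scale features, and a weighted combination of main and auxiliary classification losses. The following is a minimal PyTorch-style sketch of that pipeline; the patch scales, layer widths, auxiliary-loss weight, and all names (multi_scale_patches, ScaleCNN, HMCNN_AC, total_loss) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the HMCNN-AC idea; sizes and weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def multi_scale_patches(cube, scales=(27, 19, 11)):
    """Crop patches at several spatial scales, all centered on the same pixel.

    cube: (B, C, S, S) patch at the largest scale; smaller scales are taken
    as center crops of it (assumed extraction scheme).
    """
    _, _, s, _ = cube.shape
    patches = []
    for k in scales:
        off = (s - k) // 2
        patches.append(cube[:, :, off:off + k, off:off + k])
    return patches


class ScaleCNN(nn.Module):
    """Small CNN that maps one scale's patch to a spectral–spatial feature vector."""
    def __init__(self, in_bands, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_bands, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, padding=1), nn.BatchNorm2d(feat_dim), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # handles any patch size
        )

    def forward(self, x):
        return self.conv(x).flatten(1)  # (B, feat_dim)


class HMCNN_AC(nn.Module):
    def __init__(self, in_bands, n_classes, n_scales=3, feat_dim=128, hidden=64):
        super().__init__()
        self.cnns = nn.ModuleList(ScaleCNN(in_bands, feat_dim) for _ in range(n_scales))
        self.aux_heads = nn.ModuleList(nn.Linear(feat_dim, n_classes) for _ in range(n_scales))
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.main_head = nn.Linear(2 * hidden, n_classes)

    def forward(self, patches):
        # Per-scale convolutional features and auxiliary predictions.
        feats = [cnn(p) for cnn, p in zip(self.cnns, patches)]
        aux_logits = [head(f) for head, f in zip(self.aux_heads, feats)]
        # Treat the scale features as a sequence and run a bidirectional LSTM.
        seq = torch.stack(feats, dim=1)          # (B, n_scales, feat_dim)
        out, _ = self.lstm(seq)                  # (B, n_scales, 2*hidden)
        return self.main_head(out[:, -1]), aux_logits


def total_loss(main_logits, aux_logits, target, aux_weight=0.3):
    # Main cross-entropy plus weighted auxiliary losses (weighting is assumed).
    loss = F.cross_entropy(main_logits, target)
    for logits in aux_logits:
        loss = loss + aux_weight * F.cross_entropy(logits, target)
    return loss


# Example usage with assumed shapes: a batch of 27x27 patches with 103 bands.
# model = HMCNN_AC(in_bands=103, n_classes=9)
# main_logits, aux_logits = model(multi_scale_patches(torch.randn(4, 103, 27, 27)))
```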
Li, S., Zhu, X., & Bao, J. (2019). Hierarchical multi-scale convolutional neural networks for hyperspectral image classification. Sensors, 19(7). https://doi.org/10.3390/s19071714