Spectral-spatial classification of hyperspectral images has been the subject of many studies in recent years. When only a few labeled pixels are available for training and the class label distribution is skewed, the task becomes very challenging because of the increased risk of overfitting when training a classifier. In this paper, we show that in this setting, a convolutional neural network with a single hidden layer can achieve state-of-the-art performance when three tricks are used: a spectral-locality-aware regularization term, and smoothing- and label-based data augmentation. The shallow network architecture prevents overfitting in the presence of many features and few training samples. The locality-aware regularization forces neighboring wavelengths to contribute similarly to the features generated during training. The new data augmentation procedure favors the selection of pixels from smaller classes, which is beneficial for skewed class label distributions. The accuracy of the proposed method is assessed on five publicly available hyperspectral images, on which it achieves state-of-the-art results. Like other spectral-spatial classification methods, we use the entire image (labeled and unlabeled pixels) to infer the class of its unlabeled pixels. To investigate the positive bias induced by using the entire image, we propose a new learning setting in which unlabeled pixels are not used to build the classifier. Results show that the proposed tricks are also beneficial in this setting and substantiate the advantages of using both labeled and unlabeled pixels from the image for hyperspectral image classification.
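To make the regularization idea concrete, below is a minimal, illustrative sketch (not the authors' code) of a single-hidden-layer CNN whose filters slide over the spectral (wavelength) axis, together with a penalty on differences between neighboring filter weights so that adjacent wavelengths contribute similarly to each learned feature. All names, layer sizes, and the weighting factor `lam` are hypothetical and assume PyTorch.

```python
import torch
import torch.nn as nn

class ShallowSpectralCNN(nn.Module):
    """One convolutional hidden layer over the spectral axis, then a linear classifier."""
    def __init__(self, n_bands, n_classes, n_filters=16, kernel_size=16):
        super().__init__()
        self.conv = nn.Conv1d(1, n_filters, kernel_size)  # filters over wavelengths
        self.fc = nn.Linear(n_filters * (n_bands - kernel_size + 1), n_classes)

    def forward(self, x):                                 # x: (batch, n_bands)
        h = torch.relu(self.conv(x.unsqueeze(1)))         # (batch, n_filters, L)
        return self.fc(h.flatten(1))

def spectral_locality_penalty(conv):
    """Penalize differences between weights of neighboring wavelengths,
    encouraging adjacent bands to contribute similarly to each feature."""
    w = conv.weight                                       # (n_filters, 1, kernel_size)
    return ((w[..., 1:] - w[..., :-1]) ** 2).sum()

# Usage sketch: add the penalty to the classification loss with weight `lam`.
# model = ShallowSpectralCNN(n_bands=200, n_classes=16)
# loss = nn.functional.cross_entropy(model(x), y) + lam * spectral_locality_penalty(model.conv)
```

The augmentation tricks described above could be sketched in a similar spirit, e.g., by sampling pixels for augmentation with probabilities inversely proportional to class frequency so that smaller classes are favored.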
Acquarelli, J., Marchiori, E., Buydens, L. M. C., Tran, T., & van Laarhoven, T. (2018). Spectral-spatial classification of hyperspectral images: Three tricks and a new learning setting. Remote Sensing, 10(7). https://doi.org/10.3390/rs10071156