Rotation invariance regularization for remote sensing image scene classification with convolutional neural networks


Abstract

Deep convolutional neural networks (DCNNs) have achieved significant improvements in remote sensing image scene classification owing to their powerful feature representations. However, because of the high variance and limited volume of the available remote sensing datasets, DCNNs are prone to overfitting the data used for their training. To address this problem, this paper proposes a novel scene classification framework based on a deep Siamese convolutional network with rotation invariance regularization. Specifically, we design a data augmentation strategy for the Siamese model to learn a rotation-invariant DCNN, achieved by directly enforcing the training samples before and after rotation to be mapped close to each other. In addition to the cross-entropy cost function used in traditional CNN models, we impose a rotation invariance regularization constraint on the objective function of the proposed model. Experimental results on three publicly available scene classification datasets show that the proposed method generally improves classification performance by 2-3% and achieves satisfactory results compared with several state-of-the-art methods.
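The objective described in the abstract combines a standard cross-entropy term with a rotation invariance penalty computed over the two Siamese branches. The sketch below is a minimal NumPy illustration of that idea, not the authors' implementation: it assumes the regularizer is the mean squared Euclidean distance between the branch features of an image and its rotated copy, weighted by a hypothetical coefficient `lam`.

```python
import numpy as np

def rotation_invariance_loss(logits, labels, feat_orig, feat_rot, lam=0.1):
    """Illustrative combined objective (assumed form, not the paper's exact one):
    softmax cross-entropy on the original images plus a rotation invariance
    regularizer pulling the Siamese features of an image and its rotated
    version toward each other."""
    # Numerically stable softmax cross-entropy over the batch.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels].mean()
    # Rotation invariance term: mean squared distance between branch features.
    ri = np.mean(np.sum((feat_orig - feat_rot) ** 2, axis=1))
    return ce + lam * ri
```

When the two branches produce identical features, the penalty vanishes and the loss reduces to plain cross-entropy; increasing `lam` trades classification fit against rotation invariance of the learned representation.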

Citation (APA)
Qi, K., Yang, C., Hu, C., Shen, Y., Shen, S., & Wu, H. (2021). Rotation invariance regularization for remote sensing image scene classification with convolutional neural networks. Remote Sensing, 13(4), 1–23. https://doi.org/10.3390/rs13040569
