Multi-Layers Feature Fusion of Convolutional Neural Network for Scene Classification of Remote Sensing

Abstract

Remote sensing scene classification remains a challenging task in remote sensing applications. How to effectively extract features from a dataset of limited scale is crucial for improving scene classification. Recently, convolutional neural networks (CNNs) have performed impressively in many fields of computer vision and have been applied to remote sensing. However, most works focus on the feature maps of the last convolutional layer and pay little attention to the benefits of additional layers. In fact, the feature information hidden in different layers has the potential to improve feature discrimination. The main focus of this work is how to exploit the potential of multiple layers of a CNN model. Therefore, this paper proposes multi-layer feature fusion based on a CNN and designs a fusion module to address the relevant issues of fusion. In this module, first, all feature maps are transformed to matching sizes, since feature maps of different scales cannot be fused directly; then, two fusion methods are introduced to integrate feature maps from different layers rather than from the last convolutional layer only; finally, the fused features are delivered to the next layer or to the classifier, as in a routine CNN. The experimental results show that the proposed methods achieve promising performance on public datasets.
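The fusion module described in the abstract has three steps: resize feature maps from different layers to a common spatial size, fuse them, and pass the result on. A minimal NumPy sketch of those steps is shown below; the nearest-neighbor resizing and the two fusion methods (channel concatenation and element-wise summation) are plausible illustrations, not the authors' exact implementation.

```python
import numpy as np

def resize_map(fmap, target_hw):
    """Nearest-neighbor resize of a (C, H, W) feature map to target_hw = (th, tw)."""
    c, h, w = fmap.shape
    th, tw = target_hw
    rows = np.arange(th) * h // th   # source row index for each target row
    cols = np.arange(tw) * w // tw   # source column index for each target column
    return fmap[:, rows][:, :, cols]

def fuse(maps, method="concat"):
    """Fuse feature maps from different layers after matching their sizes."""
    # Bring every map to the smallest spatial size among the inputs.
    target = min((m.shape[1], m.shape[2]) for m in maps)
    resized = [resize_map(m, target) for m in maps]
    if method == "concat":
        # Stack along the channel axis; channel counts may differ.
        return np.concatenate(resized, axis=0)
    if method == "sum":
        # Element-wise addition; requires equal channel counts.
        return np.sum(resized, axis=0)
    raise ValueError(f"unknown fusion method: {method}")

# Example: fuse a shallow 64-channel 56x56 map with a deep 128-channel 28x28 map.
shallow = np.random.rand(64, 56, 56)
deep = np.random.rand(128, 28, 28)
fused = fuse([shallow, deep], method="concat")
print(fused.shape)  # (192, 28, 28)
```

In a real CNN the resizing would typically be done with pooling or learned interpolation inside the network, and the fused tensor would feed the next layer or the classifier as the abstract describes.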

Citation (APA)

Ma, C., Mu, X., & Sha, D. (2019). Multi-Layers Feature Fusion of Convolutional Neural Network for Scene Classification of Remote Sensing. IEEE Access, 7, 121685–121694. https://doi.org/10.1109/ACCESS.2019.2936215
