Multiscale information fusion for hyperspectral image classification based on hybrid 2D-3D CNN

Citations: 56 · Mendeley readers: 19

Abstract

Hyperspectral images are widely used for classification because of their rich spectral information combined with spatial information. To handle the high dimensionality and high nonlinearity of hyperspectral images, deep learning methods based on convolutional neural networks (CNNs) are widely used in hyperspectral classification. However, most CNN structures are stacked vertically and use a single size of convolutional kernel or pooling layer, so they cannot fully mine the multiscale information in hyperspectral images. When such networks meet the practical challenge of a limited labeled hyperspectral image dataset, i.e., the "small sample problem", their classification accuracy and generalization ability are limited. In this paper, to tackle the small sample problem, we apply semantic segmentation to pixel-level hyperspectral classification, exploiting the comparability of the two tasks. A lightweight multiscale squeeze-and-excitation pyramid pooling network (MSPN) is proposed. It consists of a multiscale 3D CNN module, a squeeze-and-excitation module, and a pyramid pooling module with 2D CNN. This hybrid 2D-3D CNN MSPN framework can learn and fuse deeper hierarchical spatial-spectral features with fewer training samples. The proposed MSPN was tested on three publicly available hyperspectral classification datasets: Indian Pines, Salinas, and Pavia University. Using 5%, 0.5%, and 0.5% of the training samples of the three datasets, the classification accuracies of the MSPN were 96.09%, 97%, and 96.56%, respectively. In addition, we also evaluated a more recent dataset with higher spatial resolution, WHU-Hi-LongKou, as a more challenging benchmark. Using only 0.1% of the training samples, we achieved a 97.31% classification accuracy, far superior to state-of-the-art hyperspectral classification methods.
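The abstract outlines the MSPN pipeline: multiscale 3D convolutions extract spatial-spectral features, a squeeze-and-excitation block reweights channels, and a 2D pyramid pooling stage fuses context at several scales before per-pixel classification. The PyTorch sketch below illustrates how such a hybrid 2D-3D design can be wired together; the layer widths, kernel sizes, band count, and class count are illustrative assumptions, not the authors' published MSPN configuration.

```python
# Illustrative sketch of a hybrid 2D-3D CNN of the kind described in the
# abstract: multiscale 3D convolutions -> squeeze-and-excitation -> 2D pyramid
# pooling. All sizes here are assumptions chosen for readability.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SqueezeExcite(nn.Module):
    """Channel attention: global average pool, then a two-layer bottleneck."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):                          # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                     # squeeze to (B, C)
        w = torch.sigmoid(self.fc2(F.relu(self.fc1(w))))
        return x * w.unsqueeze(-1).unsqueeze(-1)   # excite: reweight channels


class HybridMSPNSketch(nn.Module):
    def __init__(self, bands=30, num_classes=4):
        super().__init__()
        # Multiscale 3D convolutions over (spectral, height, width) with
        # different spectral kernel depths, concatenated along channels.
        self.conv3d_a = nn.Conv3d(1, 8, kernel_size=(3, 3, 3), padding=(1, 1, 1))
        self.conv3d_b = nn.Conv3d(1, 8, kernel_size=(5, 3, 3), padding=(2, 1, 1))
        # Collapse the spectral axis into 2D feature maps for the 2D stages.
        self.reduce = nn.Conv2d(16 * bands, 64, kernel_size=1)
        self.se = SqueezeExcite(64)
        # Pyramid pooling: pool to several grid sizes, upsample back,
        # and concatenate with the original features.
        self.pool_sizes = (1, 2, 4)
        self.classifier = nn.Conv2d(64 * (1 + len(self.pool_sizes)), num_classes, 1)

    def forward(self, x):                          # x: (B, 1, bands, H, W)
        feats = torch.cat([F.relu(self.conv3d_a(x)),
                           F.relu(self.conv3d_b(x))], dim=1)
        b, c, d, h, w = feats.shape
        feats = feats.reshape(b, c * d, h, w)      # 3D -> 2D feature maps
        feats = F.relu(self.reduce(feats))
        feats = self.se(feats)
        pyramid = [feats]
        for s in self.pool_sizes:
            p = F.adaptive_avg_pool2d(feats, s)
            pyramid.append(F.interpolate(p, size=(h, w), mode='bilinear',
                                         align_corners=False))
        return self.classifier(torch.cat(pyramid, dim=1))  # per-pixel logits


if __name__ == "__main__":
    model = HybridMSPNSketch(bands=30, num_classes=4)
    patch = torch.randn(2, 1, 30, 9, 9)            # two 9x9 patches, 30 bands
    print(model(patch).shape)                      # -> torch.Size([2, 4, 9, 9])
```

The toy forward pass at the bottom only confirms the tensor shapes for a 9x9 patch with 30 bands; the actual MSPN hyperparameters and training setup are those reported in the paper.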


Citation (APA)

Gong, H., Li, Q., Li, C., Dai, H., He, Z., Wang, W., … Mu, T. (2021). Multiscale information fusion for hyperspectral image classification based on hybrid 2D-3D CNN. Remote Sensing, 13(12). https://doi.org/10.3390/rs13122268


Readers' Seniority

PhD / Postgrad / Masters / Doc: 2 (67%)
Lecturer / Post doc: 1 (33%)

Readers' Discipline

Computer Science: 3 (50%)
Social Sciences: 2 (33%)
Engineering: 1 (17%)
