Multimodal Attention-Aware Convolutional Neural Networks for Classification of Hyperspectral and LiDAR Data

Abstract

The attention mechanism is one of the most influential ideas in the deep learning community and has shown excellent efficiency in various computer vision tasks. This article therefore proposes a convolutional neural network with an attention mechanism to enhance feature extraction from light detection and ranging (LiDAR) data. In addition, our elaborately designed cascaded block contains a short-path architecture that benefits multistage information exchange. By fully exploiting the elevation information in LiDAR data and efficiently utilizing the spatial-spectral information underlying hyperspectral data, our method provides a novel solution for multimodal feature fusion. Experiments are conducted on the LiDAR and hyperspectral dataset provided by the 2013 IEEE GRSS Data Fusion Contest and on the multisource Trento dataset to demonstrate the effectiveness of the proposed method. The experimental results show that the proposed method outperforms several popular baselines on both LiDAR and multimodal remote sensing data.
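The abstract does not specify the exact attention block used, but the general idea of attention-based channel reweighting for a feature map can be sketched as a squeeze-and-excitation style operation. The sketch below is a minimal illustration in plain NumPy; all names, shapes, and the reduction ratio are hypothetical, not taken from the paper.

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation style channel attention (illustrative sketch).

    feature_map: (H, W, C) array, e.g. features extracted from LiDAR data.
    w1: (C, C // r) and w2: (C // r, C) bottleneck weights (r = reduction ratio).
    Returns the feature map with each channel rescaled by a learned weight in (0, 1).
    """
    # Squeeze: global average pool over the spatial dimensions -> (C,)
    squeezed = feature_map.mean(axis=(0, 1))
    # Excitation: two-layer bottleneck, ReLU then sigmoid -> per-channel weights
    hidden = np.maximum(squeezed @ w1, 0.0)
    weights = 1.0 / (1.0 + np.exp(-(hidden @ w2)))  # (C,)
    # Scale: reweight every channel of the feature map
    return feature_map * weights

rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 8, 16))   # toy feature map (hypothetical sizes)
w1 = rng.standard_normal((16, 4)) * 0.1  # reduction ratio r = 4
w2 = rng.standard_normal((4, 16)) * 0.1
out = channel_attention(fmap, w1, w2)
print(out.shape)
```

Because the sigmoid keeps every channel weight strictly between 0 and 1, the block can only attenuate channels, letting the network emphasize informative channels (e.g. elevation-related ones) relative to the rest.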

Citation (APA)

Zhang, H., Yao, J., Ni, L., Gao, L., & Huang, M. (2023). Multimodal Attention-Aware Convolutional Neural Networks for Classification of Hyperspectral and LiDAR Data. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 16, 3635–3644. https://doi.org/10.1109/JSTARS.2022.3187730
