Ship classification based on attention mechanism and multi-scale convolutional neural network for visible and infrared images

Abstract

Visible-image quality is highly susceptible to changes in illumination, so ship classification from images acquired by a single sensor has inherent limitations. This study proposes a ship classification method based on an attention mechanism and a multi-scale convolutional neural network (MSCNN) for visible and infrared images. First, features of the visible and infrared images are extracted by a two-stream, symmetric multi-scale convolutional neural network module and then concatenated to make full use of the complementary information in the multi-modal images. Next, an attention mechanism is applied to the concatenated fusion features to emphasize local detail regions in the feature map, further improving the feature-representation capability of the model. Finally, the attention-weighted features and the original concatenated fusion features are added element-wise and fed into fully connected layers and a Softmax output layer to produce the final classification. The effectiveness of the proposed method is verified on the visible and infrared spectra (VAIS) dataset, on which it achieves 93.81% classification accuracy. Compared with other state-of-the-art methods, the proposed method extracts features more effectively and achieves better overall classification performance.
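The fusion pipeline the abstract describes (concatenate two-stream features, apply attention weights, add the attended features back to the original fused features element-wise, then classify with a softmax head) can be sketched in pure Python. This is an illustrative toy, not the authors' implementation: the feature vectors stand in for the two MSCNN stream outputs, the sigmoid gate stands in for the paper's learned attention module, and the `classify` weights are hypothetical placeholders.

```python
import math


def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]


def attention_fuse(vis_feat, ir_feat):
    # Concatenate the two modality feature vectors (stand-ins for the
    # two-stream MSCNN outputs in the paper).
    fused = vis_feat + ir_feat
    # Toy attention: a sigmoid gate over the fused features. The paper's
    # attention module is learned; this gate is purely illustrative.
    weights = [1.0 / (1.0 + math.exp(-x)) for x in fused]
    attended = [w * f for w, f in zip(weights, fused)]
    # Residual-style element-wise addition of the attended features and
    # the original concatenated fusion features, as in the abstract.
    return [a + f for a, f in zip(attended, fused)]


def classify(features, n_classes=6):
    # Hypothetical fully connected layer with fixed toy weights,
    # followed by a Softmax output layer.
    s = sum(features)
    logits = [s * (k + 1) / n_classes for k in range(n_classes)]
    return softmax(logits)
```

A usage example: `classify(attention_fuse([0.5, -1.0], [2.0, 0.0]))` returns a probability distribution over the (toy) ship classes; in the real model the class count is fixed by the VAIS category set and all weights are learned end-to-end.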

Citation (APA)

Ren, Y., Yang, J., Guo, Z., Zhang, Q., & Cao, H. (2020). Ship classification based on attention mechanism and multi-scale convolutional neural network for visible and infrared images. Electronics (Switzerland), 9(12), 1–20. https://doi.org/10.3390/electronics9122022
