Observing clouds to infer the weather is one of the key means by which people forecast it. Deep learning has made notable progress in weather forecasting, particularly in the automatic recognition of severe weather from satellite images, which can be cast as an image classification problem. Publicly available satellite image benchmark databases try to link weather directly with satellite images; however, a single image modality is far from sufficient to correctly identify weather systems and clouds. We therefore integrate images with meteorological elements, labeling five of them: season, month, date stamp, and geographic longitude and latitude. To effectively exploit these modalities for identifying clouds and weather systems through satellite image classification, we propose a new satellite image classification framework, the multimodal auxiliary network (MANET). MANET consists of three parts: an image feature extraction module based on a convolutional neural network, a meteorological information feature extraction module based on a perceptron, and layer-level multimodal fusion. MANET successfully integrates the multimodal information, i.e., meteorological elements and satellite images. Experimental results show that MANET achieves better classification of weather systems, clouds, and land cover from satellite images.
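To make the three-part design concrete, below is a minimal PyTorch sketch of such a network. It is not the paper's implementation: the backbone depth, feature dimensions, and the fusion scheme (sigmoid channel gating of an intermediate feature map by the meteorological features, one plausible form of layer-level fusion) are all assumptions, and the class name MANETSketch and the 5-dimensional meteorological encoding are hypothetical.

import torch
import torch.nn as nn

class MANETSketch(nn.Module):
    """Hypothetical sketch: a CNN branch for the satellite image,
    a perceptron (MLP) branch for the five meteorological elements,
    and layer-level fusion injecting the latter into the former."""

    def __init__(self, num_classes: int, num_meteo: int = 5):
        super().__init__()
        # Image branch: a toy CNN standing in for the paper's backbone.
        self.conv1 = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1),
                                   nn.BatchNorm2d(32), nn.ReLU())
        self.conv2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1),
                                   nn.BatchNorm2d(64), nn.ReLU())
        # Meteorological branch: perceptron over season, month, date stamp,
        # longitude, latitude (encoded here as a 5-d vector for simplicity).
        self.meteo_mlp = nn.Sequential(nn.Linear(num_meteo, 64), nn.ReLU(),
                                       nn.Linear(64, 64), nn.ReLU())
        # Layer-level fusion: project meteorological features to per-channel
        # gates applied to an intermediate CNN feature map (assumed scheme).
        self.fuse = nn.Linear(64, 64)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, num_classes))

    def forward(self, image: torch.Tensor, meteo: torch.Tensor) -> torch.Tensor:
        x = self.conv2(self.conv1(image))        # (B, 64, H/4, W/4)
        m = self.meteo_mlp(meteo)                # (B, 64)
        gate = torch.sigmoid(self.fuse(m))       # (B, 64) channel gates
        x = x * gate[:, :, None, None]           # inject meteo info into the image stream
        return self.head(x)

# Purely illustrative usage with random tensors:
model = MANETSketch(num_classes=10)
logits = model(torch.randn(2, 3, 128, 128), torch.randn(2, 5))
print(logits.shape)  # torch.Size([2, 10])

Channel gating is only one way to realize layer-level fusion; concatenating the meteorological features with the image features before the classifier head would be an equally simple alternative.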
Citation:
Bai, C., Zhao, D., Zhang, M., & Zhang, J. (2022). Multimodal Information Fusion for Weather Systems and Clouds Identification From Satellite Images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 15, 7333–7345. https://doi.org/10.1109/JSTARS.2022.3202246