Multimodal ground-based cloud classification using joint fusion convolutional neural network


Abstract

Accurate ground-based cloud classification is a challenging task and still under development. Most current methods take only the visual features of clouds into consideration, which is not robust to environmental factors. In this paper, we present a novel joint fusion convolutional neural network (JFCNN) that integrates multimodal information for ground-based cloud classification. To learn heterogeneous features (visual features and multimodal features) from the ground-based cloud data, we design the JFCNN as a two-stream structure containing a vision subnetwork and a multimodal subnetwork. We also propose a novel layer, named the joint fusion layer, to jointly learn the two kinds of cloud features under one framework. After training the JFCNN, we extract the visual and multimodal features from the two subnetworks and integrate them using a weighted strategy. The proposed JFCNN is validated on the multimodal ground-based cloud (MGC) dataset and achieves remarkable performance, demonstrating its effectiveness for the ground-based cloud classification task.
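The weighted integration of the two subnetworks' features can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature dimensions, the L2 normalization, and the fusion weight `w` are assumptions, and `weighted_fusion` is a hypothetical helper name.

```python
import numpy as np

def weighted_fusion(visual_feat, multimodal_feat, w=0.7):
    """Combine features extracted from the vision and multimodal
    subnetworks into one descriptor via a weighted strategy.

    Assumptions (not from the paper): each feature vector is
    L2-normalized before weighting, and the fused descriptor is the
    concatenation of the two weighted vectors.
    """
    v = visual_feat / np.linalg.norm(visual_feat)
    m = multimodal_feat / np.linalg.norm(multimodal_feat)
    # Weight the visual stream by w and the multimodal stream by 1 - w,
    # then concatenate into a single fused feature for classification.
    return np.concatenate([w * v, (1.0 - w) * m])

# Toy vectors standing in for the two subnetworks' outputs
visual = np.array([3.0, 4.0])       # e.g. visual descriptor from the CNN
multimodal = np.array([0.0, 5.0])   # e.g. embedded weather measurements
fused = weighted_fusion(visual, multimodal, w=0.7)
print(fused.shape)  # (4,)
```

The fused vector can then be fed to any standard classifier; the relative weight `w` controls how much the visual stream dominates the multimodal stream.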

Citation (APA)
Liu, S., Li, M., Zhang, Z., Xiao, B., & Cao, X. (2018). Multimodal ground-based cloud classification using joint fusion convolutional neural network. Remote Sensing, 10(6). https://doi.org/10.3390/rs10060822
