Deep multimodal fusion for ground-based cloud classification in weather station networks



Abstract

Most existing methods utilize only visual sensors for ground-based cloud classification, neglecting other important characteristics of clouds. In this paper, we utilize the multimodal information collected from weather station networks for ground-based cloud classification and propose a novel method named deep multimodal fusion (DMF). To learn the visual features, we train a convolutional neural network (CNN) model and obtain the sum convolutional map (SCM) by applying a pooling operation across all the feature maps in deep layers. Afterwards, we employ a weighted strategy to integrate the visual features with the multimodal features. We validate the effectiveness of the proposed DMF on the multimodal ground-based cloud (MGC) dataset, and the experimental results demonstrate that the proposed DMF achieves better results than state-of-the-art methods.
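As a rough illustration of the pipeline the abstract describes, the following PyTorch sketch forms an SCM by summing a deep layer's feature maps and then integrates the flattened map with projected multimodal sensor features through a weighted combination. The ResNet-18 backbone, the number of sensor measurements, the fusion weight, and the linear classifier head are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a deep multimodal fusion pipeline in the spirit of DMF.
# Backbone, dimensions, fusion weight, and classifier are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class DeepMultimodalFusionSketch(nn.Module):
    def __init__(self, num_multimodal_feats=4, num_classes=7, visual_weight=0.7):
        super().__init__()
        # Deep convolutional layers of a pretrained CNN (assumed ResNet-18 backbone).
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.conv_layers = nn.Sequential(*list(backbone.children())[:-2])
        self.visual_weight = visual_weight
        # Project multimodal measurements (e.g., temperature, humidity, pressure,
        # wind speed) to the dimension of the flattened SCM (7x7 = 49 for 224x224 input).
        self.mm_proj = nn.Linear(num_multimodal_feats, 49)
        self.classifier = nn.Linear(49, num_classes)

    def forward(self, image, multimodal):
        feat_maps = self.conv_layers(image)      # (B, C, H, W) deep feature maps
        scm = feat_maps.sum(dim=1)               # pool across all feature maps -> SCM (B, H, W)
        visual = F.normalize(scm.flatten(1), dim=1)
        mm = F.normalize(self.mm_proj(multimodal), dim=1)
        # Weighted integration of visual and multimodal features.
        fused = self.visual_weight * visual + (1.0 - self.visual_weight) * mm
        return self.classifier(fused)

# Example usage on random inputs:
# model = DeepMultimodalFusionSketch()
# logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 4))
```

The weighted sum lets the relative contribution of the visual and weather-sensor modalities be tuned as a single hyperparameter; the specific value 0.7 above is a placeholder, not the paper's setting.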

Citation (APA)

Liu, S., & Li, M. (2018). Deep multimodal fusion for ground-based cloud classification in weather station networks. EURASIP Journal on Wireless Communications and Networking, 2018(1). https://doi.org/10.1186/s13638-018-1062-0
